Security operations centers are drowning in indicators: IP addresses, file hashes, domain names, and registry keys. Millions of specific data points that signature-based detection matches against observed activity. The fundamental problem is not the volume of indicators but the approach itself. By the time indicators are available, attacks are well underway. Initial compromise has occurred. Lateral movement is progressing. The attacker already has what they came for. Detection based on indicators is structurally reactive, always catching attacks after critical damage occurs.
The shift security teams must make is from detecting known-bad indicators to detecting malicious intent before it fully manifests. This is not about better threat intelligence or faster indicator feeds. It is about recognizing behavioral patterns that reveal attacker objectives in progress, often before specific indicators of compromise exist in threat intel feeds. The technology to detect intent exists today. The challenge is organizational: accepting that indicator-based detection is insufficient and committing to fundamentally different approaches.
Why indicators arrive too late
The indicator lifecycle creates unavoidable detection gaps. An attacker uses a novel tool or technique. That activity eventually gets observed by a victim organization or security researcher. Analysis extracts indicators: file hashes, network signatures, registry artifacts. Those indicators get published to threat intelligence feeds. Organizations consume those feeds and update detection rules. This process takes days at minimum, often weeks. During that entire period, the attack technique is completely invisible to indicator-based detection.
The situation worsens as attackers adapt. Sophisticated adversaries now rotate infrastructure specifically to outpace indicator feeds. Domains are used for hours or days, then abandoned. Malware is generated polymorphically. Each instance has different file hashes, different byte patterns, different network signatures. By the time indicators for one variant are published, attackers have moved to new variants. Indicator-based detection becomes a permanent game of catch-up that defenders cannot win.
Living-off-the-land techniques break indicator-based detection completely. When attackers use PowerShell, WMI, RDP, and other legitimate administrative tools, there are no malicious indicators. The tools are legitimate. The protocols are standard. The file hashes belong to Microsoft Windows components. Indicator-based detection sees nothing wrong because it can only match specific known-bad patterns. It cannot reason about whether legitimate tools are being used for malicious purposes.
AI acceleration makes this worse. Automated tools generate infinite attack variants faster than human analysts can extract and publish indicators. Autonomous agents adapt techniques based on what defenses they encounter. Generative models create custom malware that has never existed before and will never be used again. The indicator-based paradigm, which catalogs known threats and matches against them, fails completely when threats are constantly novel and specifically designed to evade known patterns.
What early attacker intent actually looks like
Attacker intent manifests in behavioral progressions before specific indicators exist. The progression follows logical necessity. Attackers must understand the target environment before they can exploit it effectively. They must establish command and control before coordinating multi-stage operations. They must escalate privileges before accessing sensitive systems. They must move laterally before reaching high-value targets. Each stage leaves behavioral traces in network telemetry even when specific indicators are absent or useless.
Reconnaissance intent appears as connection patterns probing network topology and service availability. An attacker does not need to use malicious scanning tools. Normal connection attempts to multiple systems, services, or ports reveal reconnaissance regardless of protocol or tool. The behavioral signature is the pattern: many connection attempts, systematic enumeration, focus on discovering what exists and what is accessible. This intent is visible in flow metadata: which systems contacted which other systems, what services were probed, and what sequence the probing followed.
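The fan-out pattern described above can be sketched as a simple heuristic over flow metadata. This is an illustrative baseline, not DeepTempo's model; the function name, record shape, window size, and threshold are all assumptions for the sketch:

```python
from collections import defaultdict

def fanout_scores(flows, window_secs=600, threshold=25):
    """Flag sources whose distinct-destination fan-out within a time
    window exceeds a threshold -- a simple reconnaissance heuristic.
    `flows` is an iterable of (timestamp, src_ip, dst_ip, dst_port).
    Threshold and window are illustrative, not tuned values."""
    buckets = defaultdict(set)  # (src, window index) -> {(dst, port), ...}
    for ts, src, dst, port in flows:
        buckets[(src, int(ts // window_secs))].add((dst, port))
    # Keep only (source, window) pairs that probed many distinct targets
    return {key: len(targets)
            for key, targets in buckets.items()
            if len(targets) >= threshold}
```

A real detector would baseline each host's normal fan-out rather than use a fixed threshold, but the core signal is the same: systematic enumeration stands out in flow records even when every individual connection looks routine.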
C2 establishment intent shows up in timing and volume characteristics before specific C2 infrastructure can be identified. Regular connection intervals indicate beaconing even when the destination domain or IP is not yet known to be malicious. Small outbound requests with larger responses suggest command delivery and output retrieval. Sustained connections to external destinations without clear business purpose indicate C2 regardless of whether that specific destination is in threat intelligence feeds. The behavioral pattern reveals intent: this system is receiving remote instructions and reporting results.
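The timing regularity that marks beaconing can be measured without knowing anything about the destination. A minimal sketch, assuming only a list of connection timestamps to one destination (the function name and cutoff values are illustrative):

```python
import statistics

def looks_like_beaconing(timestamps, max_cv=0.1, min_events=10):
    """Return True if connection timestamps to a single destination look
    like beaconing: many events with highly regular spacing. Regularity
    is the coefficient of variation (stdev / mean) of inter-arrival
    times; near-zero CV means clockwork intervals."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return False
    cv = statistics.pstdev(gaps) / mean_gap
    return cv <= max_cv
```

Human-driven traffic produces bursty, irregular gaps (high CV); an implant checking in on a timer produces near-constant gaps (CV close to zero), which is why the pattern is visible before the destination appears in any feed. Sophisticated implants add jitter, so a fixed CV cutoff is only a starting point.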
Lateral movement intent appears as sequential access progression. Initial compromise might be a workstation. The attacker authenticates to a server. Then from that server to another server. Then to a domain controller. Each authentication is technically legitimate. Stolen credentials authenticate successfully, producing no indicator of credential compromise. But the progression reveals intent: systematic movement from initial foothold toward high-value systems. This is visible in authentication logs and network flows showing which systems accessed which other systems in what sequence.
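The sequential progression described above can be approximated by chaining authentication events where each hop starts from the host the previous hop reached. This is a toy sketch under stated assumptions (event shape, gap limit, and hop count are illustrative), not a production detector:

```python
def lateral_chains(auth_events, max_gap=1200, min_hops=3):
    """Find authentication chains: successive logins by the same account
    where each hop originates from the host the previous hop reached,
    within `max_gap` seconds of the prior hop. `auth_events` is a list
    of (timestamp, account, src_host, dst_host), sorted by timestamp."""
    chains = []  # each chain is [last_timestamp, account, [host path]]
    for ts, acct, src, dst in auth_events:
        extended = False
        for chain in chains:
            last_ts, chain_acct, path = chain
            if (chain_acct == acct and path[-1] == src
                    and ts - last_ts <= max_gap):
                chain[0] = ts
                path.append(dst)
                extended = True
        if not extended:
            chains.append([ts, acct, [src, dst]])
    # Report only chains long enough to suggest deliberate progression
    return [path for _, _, path in chains if len(path) - 1 >= min_hops]
```

Every login in such a chain succeeds with valid credentials and raises no failure alert; it is the workstation-to-server-to-domain-controller sequence, compressed into minutes, that reveals the intent.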
Credential access intent manifests before credentials are actually stolen or used. Processes accessing memory regions where credentials are stored, registry access to security hives, network traffic suggesting password spraying. These behaviors reveal intent to obtain credentials. The specific credentials obtained might never be visible to security tools (in-memory access leaves minimal traces), but the attempt to access credentials is observable in system activity and network patterns.
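Of the behaviors above, password spraying has a particularly clean behavioral signature: one source tries many distinct accounts with only a few attempts each, staying under lockout thresholds. A minimal sketch over failed-login records (names and thresholds are assumptions):

```python
from collections import defaultdict

def spray_sources(failed_logins, min_accounts=20, max_per_account=3):
    """Password-spray heuristic: flag sources that fail logins against
    many distinct accounts while keeping attempts per account low.
    `failed_logins` is an iterable of (src_host, account) pairs."""
    attempts = defaultdict(lambda: defaultdict(int))
    for src, acct in failed_logins:
        attempts[src][acct] += 1
    return [src for src, per_acct in attempts.items()
            if len(per_acct) >= min_accounts
            and max(per_acct.values()) <= max_per_account]
```

Note the inversion relative to brute-force detection: a brute-forcer hammers one account and trips per-account lockout rules, while a sprayer deliberately avoids them, so the wide-and-shallow shape of the attempts is the signal.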
Data staging and exfiltration intent appears in access patterns and transfer volumes. Suddenly accessing file shares or databases that were never previously accessed. Copying large volumes of data from production systems to temporary locations. Compressing archives in unusual locations. Transferring data to external destinations that have no documented business relationship. Each individual action might have legitimate explanations, but the sequence reveals intent: someone is preparing to steal data or actively stealing it.
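The volume-plus-destination logic above can be sketched as a per-host check against the host's own baseline. An illustrative heuristic (the function name, ratio, and recency window are assumptions), not a complete exfiltration detector:

```python
import statistics

def exfil_suspect(history_bytes, today_bytes, dest_first_seen_days,
                  spike_ratio=10.0, new_dest_days=7):
    """Flag a host whose outbound volume today far exceeds its own
    historical baseline AND whose destination was first observed
    recently. `history_bytes` is daily outbound totals for this host;
    `dest_first_seen_days` is how many days ago the destination first
    appeared in this environment."""
    baseline = statistics.median(history_bytes)
    volume_spike = baseline > 0 and today_bytes / baseline >= spike_ratio
    new_destination = dest_first_seen_days <= new_dest_days
    return volume_spike and new_destination
```

Requiring both conditions is the point: a backup job produces a spike to a long-established destination, and routine traffic to a new SaaS tenant stays near baseline volume, so neither alone fires. The combination (gigabytes from a host that normally sends megabytes, to infrastructure registered days ago) matches the staging-and-theft sequence described above.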
Foundation models reading long-sequence behavior
Understanding attacker intent requires analyzing activity sequences over time, not evaluating individual events in isolation. A single authentication is benign. Authentication from system A to system B to system C to system D within twenty minutes suggests lateral movement. A process accessing one file is routine. A process accessing thousands of files across multiple shares in rapid succession suggests data collection.
Traditional security tools struggle with sequence analysis because it requires maintaining context over long time windows (hours to days) and correlating events across multiple systems and data sources. Rule-based correlation engines can encode specific sequences someone thought to write rules for, but attackers use endless sequence variations. Writing explicit rules for every possible malicious progression is impractical.
DeepTempo identifies attacker intent from the earliest deviations in system activity, without relying on rules, signatures, or anomaly thresholds. The model learned from millions of attack sequences what malicious progression looks like at multiple levels. It understands individual events (unusual for this system), event sequences (progression toward sensitive assets), and strategic patterns (reconnaissance leading to exploitation leading to lateral movement leading to exfiltration).
This learning enables detecting intent from behavioral patterns rather than requiring specific indicators. When the model sees a workstation that typically only accesses a handful of internal services suddenly connecting to dozens of systems across multiple network segments, it recognizes reconnaissance intent even if the specific tool or technique is novel. The behavioral progression, systematic enumeration of accessible systems, reveals intent regardless of implementation details.
Critically, the model generalizes to novel attacks because it learned conceptual patterns, not specific indicators. An attacker using a custom tool that no one has documented still exhibits recognizable behavior: if conducting reconnaissance, the tool generates connection patterns probing multiple systems. If establishing C2, the tool creates regular communication to external destinations. If moving laterally, the tool's usage creates authentication and connection sequences progressing through the network. The intent is detectable even when the specific tool is completely unknown.
Case examples of intent visible before indicators
Consider an APT group using completely custom tooling for initial compromise. No file hashes match known malware. No network signatures match documented C2 protocols. Traditional indicator-based detection sees nothing. But intent is visible in behavior.
The compromise begins with phishing, delivering a payload that establishes a foothold. Flow data shows the compromised workstation beginning to contact an external IP address with regular timing. The specific destination is not in threat intelligence feeds; it is new infrastructure the APT group registered yesterday. But the beaconing pattern reveals C2 intent: connections every 60 seconds, small request sizes, a consistent destination, sustained over hours. Intent detection flags this before any indicator of the specific C2 infrastructure exists.
The attacker begins reconnaissance using built-in Windows commands. No malware is deployed. But the compromised workstation suddenly queries Active Directory to enumerate users and groups, attempts connections to dozens of internal systems it never previously contacted, and accesses file shares across multiple departments. Each individual action uses legitimate tools and protocols. But the progression reveals reconnaissance intent: systematic enumeration inconsistent with this workstation's normal behavior.
Lateral movement proceeds via legitimate administrative tools. The attacker uses stolen credentials to access additional systems via RDP and PowerShell remoting. Authentication logs show successful logins. The credentials are valid, generating no authentication failure alerts. But the progression reveals lateral movement intent: sequential access from the compromised workstation to a server to a domain controller, occurring within twenty minutes, from an account that previously only accessed the workstation. The behavioral sequence indicates intentional progression toward privileged systems.
Data exfiltration uses approved cloud storage services. The attacker uploads stolen data to OneDrive using legitimate credentials. No malicious domains are contacted. No prohibited protocols are used. But the intent is visible: a system that normally uploads dozens of megabytes per day suddenly uploads gigabytes, to a cloud account registered just days ago, immediately following the reconnaissance and lateral movement observed earlier. The behavioral pattern (reconnaissance, lateral movement, data staging, exfiltration to a new external destination) reveals the complete attack intent from initial compromise through data theft.
In each stage, specific indicators either did not exist (custom tools, new infrastructure) or were useless (legitimate tools and protocols). But intent was visible in behavioral progressions observable in network telemetry and system logs. Intent-based detection caught the attack at multiple stages before it completed. Indicator-based detection would have seen nothing until after data was stolen, if ever.
The transition is urgent
Organizations still relying primarily on indicator-based detection face structural disadvantages that worsen daily. Adversaries are adopting AI tools that generate infinite variants, rotate infrastructure rapidly, and adapt techniques based on observed defenses. Indicator-based detection was already challenged by sophisticated human attackers. It is inadequate against AI-augmented adversaries.
The gap between attack occurrence and indicator availability continues to widen. Faster threat intelligence feeds do not solve the fundamental problem; they narrow the gap slightly while attackers widen it further by accelerating tool and infrastructure rotation. The indicator paradigm is losing ground permanently.
Organizations transitioning to intent-based detection now gain lead time to deploy systems, develop operational procedures, train teams, and tune detection before facing ubiquitous AI-augmented attacks. Those delaying face implementing major detection architecture changes during active breaches, under pressure, without adequate preparation.
The technology for intent-based detection exists and is deployable today. Foundation models trained on attack sequences and benign traffic can reason about behavioral progressions that reveal malicious intent. Flow-based telemetry captures the network evidence these models analyze. Integration frameworks correlate multiple data sources into coherent context. The capability is available. The question is whether organizations will deploy it proactively or reactively.
Security teams that make this shift from chasing indicators to detecting intent position themselves to catch attacks that traditional tools miss completely. Those that do not will continue generating alerts for known threats while missing novel attacks, custom tooling, and sophisticated adversaries operating with intent visible in their behavior but absent from indicator databases. The future of effective detection is intent recognition. Organizations must make that transition now or fall further behind adversaries who have already adapted beyond what indicator-based detection can catch.