Why network threats are winning

The Encryption Blind Spot

Your network security stack is impressive on paper. Intrusion Detection Systems (IDS) analyzing traffic patterns. Network Detection and Response (NDR) platforms watching for anomalies. Security Information and Event Management (SIEM) systems correlating events. Millions of dollars invested in protecting your network perimeter and internal traffic flows.

Yet sophisticated attackers are walking right past these defenses. Not because your tools are misconfigured or your team is incompetent, but because the fundamental approach to network threat detection is broken.

The problem isn't what these tools do. It's what they can't see.

The Encryption Paradox

Encryption has become ubiquitous across modern networks, and for good reason. With cybercrime costs projected to hit $10.5 trillion annually by 2025, protecting data in transit is essential. Organizations have rightfully embraced HTTPS, TLS, and encrypted communication protocols to safeguard sensitive information.

But this necessary security measure created an unexpected consequence: it blinded our primary network detection systems.

Traditional IDS platforms like Snort, Suricata, and Zeek were built on a fundamental assumption that they could inspect packet payloads. These systems analyze the actual content of network traffic, looking for malicious patterns in the data being transmitted. They match packet contents against signature databases containing known attack patterns.

When that content is encrypted, payload inspection becomes impossible. The IDS sees encrypted data streams but cannot examine what's inside. It's like trying to detect contraband in sealed, opaque containers without being able to open them.
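
To make the blind spot concrete, here is a minimal sketch of payload-based matching. The byte patterns are hypothetical stand-ins for real Snort or Suricata rules, and the matcher is a caricature of an IDS engine, not an implementation of one. Against plaintext HTTP the pattern matches; against the same request inside a TLS session, the inspector sees only ciphertext:

```python
import os
import re

# A toy "signature database": byte patterns a real IDS would express as
# Snort/Suricata rules. These patterns are illustrative, not real rules.
SIGNATURES = {
    "directory-traversal": re.compile(rb"\.\./\.\./"),
    "sql-injection": re.compile(rb"(?i)union\s+select"),
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of all signatures that match the raw payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

# Plaintext HTTP: the malicious pattern is visible in the payload.
plaintext_request = b"GET /files?path=../../etc/passwd HTTP/1.1\r\n"
print(inspect_payload(plaintext_request))  # ['directory-traversal']

# The same request inside a TLS session: the sensor sees only ciphertext.
# Random bytes stand in for an encrypted TLS record here.
encrypted_record = os.urandom(len(plaintext_request))
print(inspect_payload(encrypted_record))   # almost certainly []
```

No amount of cleverness in the matcher helps here. The information it needs is simply not present in the bytes on the wire.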

Network security teams attempted to solve this by deploying SSL/TLS decryption appliances. These devices decrypt traffic, inspect it, then re-encrypt it before forwarding. But this approach creates its own problems: massive performance overhead, privacy concerns, regulatory compliance issues, and the reality that many modern protocols and applications simply won't work when subjected to man-in-the-middle decryption.

The result? Most organizations decrypt only a small fraction of their network traffic, leaving the majority uninspected and unmonitored.

Signatures Only Catch Yesterday's Threats

Even when IDS platforms can inspect traffic, their signature-based detection model faces insurmountable limitations.

Signature-based detection works by matching observed traffic patterns against a database of known malicious indicators. When traffic matches a signature for a known exploit, vulnerability, or attack tool, the system generates an alert. This approach works reasonably well against threats that someone has already documented and created signatures for.

The operative word is "known." Signature-based systems are inherently reactive. They detect attacks that security researchers have previously identified, analyzed, and built detection rules for. Novel attacks, zero-day exploits, and custom malware created specifically for your organization sail through undetected because no matching signature exists.
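
The gap is easy to demonstrate with the toy matcher above. A trivial encoding change turns a documented attack into a "novel" variant that the exact same signature no longer catches (again a simplified illustration, not real rule syntax):

```python
import re

# The "known" signature from the earlier sketch: the literal bytes "../../".
traversal_signature = re.compile(rb"\.\./\.\./")

known_attack = b"GET /files?path=../../etc/passwd HTTP/1.1\r\n"
novel_variant = b"GET /files?path=..%2f..%2fetc%2fpasswd HTTP/1.1\r\n"  # URL-encoded

print(bool(traversal_signature.search(known_attack)))   # True:  documented, detected
print(bool(traversal_signature.search(novel_variant)))  # False: same attack, no signature yet
```

Production engines normalize common encodings, but the arms race plays out the same way: each new evasion works until someone writes a rule for it.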

With ransomware attacks demonstrating an 81 percent year-over-year increase from 2023 to 2024, attackers are innovating faster than signature databases can keep up. By the time vendors update their signature sets with new threat patterns, attackers have already moved on to variations the signatures don't cover.

Consider the timeline: an attacker develops a new exploitation technique, successfully uses it against targets, security researchers eventually discover it, analyze it, create detection signatures, vendors distribute those signatures, and organizations deploy the updates. This process takes weeks or months, during which the attack technique remains completely invisible to signature-based detection.

Living off the Land: Invisible by Design

The most sophisticated attackers have largely abandoned custom malware that signatures might catch. Instead, they use Living off the Land (LOTL) techniques that leverage legitimate tools already present in your environment.

PowerShell, Windows Management Instrumentation (WMI), remote desktop protocols, legitimate administrative utilities. These tools exist in every enterprise environment because IT teams need them for system management and automation. When attackers use these same tools, the network traffic looks completely legitimate.

A signature-based IDS watching for malicious payload patterns has nothing to match against. The PowerShell commands executing across the network? Legitimate tool. The WMI queries enumerating systems? Standard administrative activity. The file transfers using built-in protocols? Normal business operations.

LOTL attacks don't have signatures because they don't use anything inherently malicious. They abuse legitimate functionality in malicious ways. This fundamental difference makes them invisible to detection approaches built around matching known-bad patterns.
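
One way to see the problem is to compare the flow metadata a network sensor would record for an administrator's remote task and an attacker's. The field names and values below are illustrative, not taken from any specific NDR product, but the point holds for real telemetry:

```python
# Two remote-administration flows as a network sensor might summarize them.
admin_flow = {
    "protocol": "WinRM over HTTPS (5986)",
    "payload": "encrypted",
    "dst": "file-server-03",
    "tool": "PowerShell remoting",
    "volume_kb": 52,
}
attacker_flow = {
    "protocol": "WinRM over HTTPS (5986)",
    "payload": "encrypted",
    "dst": "file-server-03",
    "tool": "PowerShell remoting",
    "volume_kb": 52,
}

# Every feature a signature-oriented sensor can observe is identical;
# the difference between the two flows is intent, which is not on the wire.
print(admin_flow == attacker_flow)  # True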

The Anomaly Detection Deluge

Recognizing the limitations of signature-based detection, the security industry pivoted toward anomaly-based approaches. Modern NDR platforms from vendors like Darktrace, Vectra, and ExtraHop analyze network behavior and flag deviations from normal patterns.

This sounds promising in theory. Instead of looking for known-bad signatures, watch for unusual behavior that might indicate an attack. The system learns what normal looks like in your environment, then alerts on anything abnormal.

In practice, anomaly-based detection creates a different nightmare: overwhelming noise.

Networks are inherently noisy environments. Legitimate users behave unpredictably. Applications generate unusual traffic patterns for non-malicious reasons. Infrastructure changes modify what counts as "normal." Every deviation from established patterns triggers an anomaly alert, and most of those deviations are harmless.

Security analysts report drowning in anomaly alerts. Thousands of "weird" events flagged daily, requiring manual investigation to determine which represent actual threats versus benign deviations. This creates the same problem signature-based systems were supposed to solve: too much noise obscuring the genuine signals that matter.

The core issue? Anomaly-based systems identify what's unusual but provide no reliable way to distinguish unusual-but-malicious from unusual-but-benign. Everything different gets flagged, forcing analysts to become the correlation engine manually triaging vast quantities of low-value alerts.
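
A deliberately naive sketch shows why. The z-score detector below, fed made-up traffic numbers, faithfully flags every large deviation from a learned baseline, and the statistics alone cannot say which deviation is the attack:

```python
import statistics

def zscore_alerts(baseline, observed, threshold=3.0):
    """Flag every observation more than `threshold` standard deviations
    from the baseline mean -- a caricature of anomaly-based detection."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [label for label, value in observed.items()
            if abs(value - mean) / stdev > threshold]

# Baseline: a host's typical outbound MB per hour, learned over two weeks.
baseline_mb = [40, 45, 38, 52, 47, 41, 44, 50, 39, 43, 46, 48, 42, 45]

# One day's deviations: three benign, one malicious, all equally "weird".
observed = {
    "backup job started early": 400.0,         # benign
    "marketing uploads a video": 350.0,        # benign
    "new BI dashboard first sync": 280.0,      # benign
    "attacker exfiltrates an archive": 320.0,  # malicious
}

print(zscore_alerts(baseline_mb, observed))
# All four fire; nothing in the arithmetic says which alert matters.
```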

The Maintenance Treadmill

Both signature-based and anomaly-based network detection systems require constant human maintenance to remain effective.

Signature-based IDS platforms need continuous rule updates. Security teams write custom detection rules for threats specific to their environment. They tune existing rules to reduce false positives. They retire outdated rules that no longer apply. This rule maintenance consumes a large share of detection engineers' time, creating an endless treadmill of updates just to maintain current detection capabilities.

Anomaly-based NDR platforms require equally intensive tuning. As environments evolve, what the system learned as "normal" becomes outdated. New applications, infrastructure changes, and shifts in user behavior all require updating baseline models. Without constant tuning, anomaly systems either generate increasing false positives or miss attacks that blend into the evolved "new normal".

Both approaches assume security teams have unlimited time to dedicate to detection system maintenance. With 67 percent of organizations reporting staffing shortages, this assumption breaks down. Teams stretched thin cannot keep pace with the maintenance burden these systems demand, leading to degraded detection capabilities over time.

Coverage Gaps in Modern Environments

Network detection challenges compound in modern hybrid environments spanning on-premises infrastructure, multiple cloud providers, and numerous SaaS applications.

Traditional IDS platforms were designed for networks where all traffic flowed through central inspection points. Modern architectures don't work that way. Workloads communicate directly with cloud services. SaaS applications exchange data peer-to-peer. Remote workers connect from anywhere. East-west traffic within cloud environments bypasses traditional network chokepoints.

Achieving comprehensive network visibility requires deploying detection capabilities everywhere: on-premises, in each cloud environment, monitoring SaaS interactions, covering remote endpoints. This deployment complexity creates coverage gaps where attacks slip through unmonitored.

Cloud-native detection tools like AWS GuardDuty and Azure Sentinel attempt to address this by providing visibility within their respective cloud environments. But they create new silos. Network security teams manage traditional IDS. Cloud security teams manage cloud-native detection. SaaS security falls to yet another team. Nobody has unified visibility across the complete network.

Attackers exploit these gaps systematically. Initial access through one environment, lateral movement to another, data exfiltration via a third. The fragmented detection systems each see pieces of the attack but none see the complete narrative.

The AI Adversary Advantage

Sophisticated threat actors have begun leveraging artificial intelligence to generate attack variations faster than detection systems can adapt. AI-enhanced reconnaissance identifies vulnerabilities automatically. Machine learning models generate polymorphic malware that changes its characteristics with each deployment. Automated tools test defenses continuously, learning which techniques succeed.

Traditional network detection, whether signature-based or anomaly-based, operates at human speed. Security researchers identify threats, analyze them, build detection logic, deploy updates. This cycle takes weeks or months.

AI-powered attacks operate at machine speed. They test, adapt, and evolve continuously. They generate variants faster than detection signatures can be created. They learn which behaviors trigger anomaly alerts and modify their approach to blend in.

The speed mismatch is fundamental. Human-curated detection logic cannot keep pace with machine-generated attack evolution. Yet most network security stacks still rely on detection approaches that require human intervention for every adaptation.

Why This Matters Now

Network-based attacks aren't slowing down. More than 30,000 vulnerabilities were disclosed last year, a 17 percent increase over the year before, and attackers actively exploit these gaps. Organizations face increasingly sophisticated adversaries using AI-enhanced tools and LOTL techniques specifically designed to evade traditional detection.

Meanwhile, security teams struggle with the tools they have. Alert fatigue from anomaly-based systems. Maintenance burden from signature-based platforms. Coverage gaps across hybrid environments. Blindness to encrypted traffic. Inability to detect novel attacks.

The fundamental approach to network threat detection is broken. Not because the tools are poorly implemented, but because the underlying detection models cannot address modern threat realities.

Encryption makes payload inspection impossible. Signatures only catch known threats. Anomalies generate overwhelming noise. LOTL attacks appear identical to legitimate traffic. AI adversaries evolve faster than human-curated detection logic.

Organizations need a fundamentally different approach to network threat detection. One that works with encrypted traffic rather than requiring decryption. One that identifies malicious behavior without relying on signatures or generating overwhelming false positives. One that adapts at machine speed to keep pace with evolving attacks.

The technology exists to make this shift. But first, security leaders need to recognize that their current network detection approach, no matter how well-funded or expertly managed, cannot protect against threats specifically designed to evade it.

Your impressive security stack isn't failing because you're doing something wrong. It's failing because the detection model itself is obsolete. Understanding this problem is the first step toward fixing it.

See the threats your tools can't.

DeepTempo's LogLM works with your existing stack to uncover evolving threats that traditional systems overlook, without adding complexity or replacing what already works.

Request a demo