Slow credential abuse: detecting the attacks UEBA was built for but misses anyway

Why UEBA was supposed to fix this

User and Entity Behavior Analytics emerged in the mid-2010s as a response to the limitations of rule-based detection. The pitch was reasonable. Build a baseline of normal behavior for each user and entity. Flag deviations. Catch the attacks that signatures miss. The category gained traction because the failure modes of signatures were obvious and the math behind UEBA looked sound. Statistical baselines are a real technique, well-understood in operations research and finance.

The problem is specific to the security domain, and it shows up most clearly under adversarial conditions: the behavior being baselined can be shaped, deliberately and slowly, by the attacker it is meant to catch.

Why UEBA fails against patient attackers

Three structural properties explain the failure.

Threshold drift. UEBA baselines update over time. They have to. Users change roles, new applications get deployed, infrastructure migrates, accounts get reused. A static baseline produces unmanageable false positive volume within weeks. So baselines drift forward, incorporating new behavior. An attacker who escalates slowly enough is incorporated into the baseline along with everything else.
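The absorption dynamic is easy to reproduce. Below is a toy simulation (all numbers invented; a simple rolling z-score stands in for a real UEBA baseline) in which an attacker escalating a service account's read volume by 1% per day is never flagged, because each day's activity is folded into the next day's baseline:

```python
import statistics

def zscore_alert(window, value, threshold=3.0):
    """Classic rolling baseline: alert when a value sits > threshold std devs out."""
    mean = statistics.mean(window)
    stdev = statistics.pstdev(window) or 1.0
    return (value - mean) / stdev > threshold

# Toy baseline: a service account averaging ~12 reads/day with normal jitter.
window = [10.0 + (i % 5) for i in range(30)]    # 30-day rolling window
reads, alerts = 12.0, 0
for day in range(180):                          # attacker escalates 1% per day
    reads *= 1.01
    if zscore_alert(window, reads):
        alerts += 1
    window = window[1:] + [reads]               # baseline absorbs the escalation

print(f"final volume: {reads:.0f} reads/day, alerts fired: {alerts}")
```

After six months the account is reading roughly six times its original volume, and the z-score never crosses the threshold, because the window always contains the attacker's own recent history.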

Per-entity isolation. UEBA models user A, user B, and host X separately. The math is correct: a normal pattern for one entity is not necessarily normal for another. The cost is that UEBA cannot reason about activity across entities. An attacker who uses one account to authenticate and a different account to act is harder for UEBA to catch than for an analyst staring at the right query.
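The cross-entity join that analyst would run can be sketched in a few lines (the event schema, hosts, and account names here are hypothetical):

```python
from collections import defaultdict

# Per-entity models score each account alone; the cross-entity signal
# lives in the shared source host.
events = [
    {"host": "web-01", "account": "svc-backup", "action": "auth.login"},
    {"host": "web-01", "account": "admin-kim",  "action": "storage.read"},
    {"host": "web-02", "account": "admin-kim",  "action": "auth.login"},
    {"host": "web-02", "account": "admin-kim",  "action": "storage.read"},
]

# Group activity by host, then flag hosts where actions were performed by an
# account that never authenticated there -- invisible to per-entity baselines.
by_host = defaultdict(lambda: {"auth": set(), "act": set()})
for e in events:
    key = "auth" if e["action"].startswith("auth.") else "act"
    by_host[e["host"]][key].add(e["account"])

suspicious = [h for h, s in by_host.items() if s["act"] - s["auth"]]
print(suspicious)
```

Each event in isolation is plausible for its own entity; only the join across entities exposes the mismatch.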

Volume of the long tail. Users do unusual things every day. Real users travel, change devices, run new tools, miss days, return on weekends. UEBA either tunes sensitive (catching attackers, but flagging so many benign anomalies that analysts drown) or tunes conservative (catching obvious attacks while incorporating subtle ones into the baseline). Neither setting solves the problem.

The category did not fail because the math is wrong. It failed because the math does not match the adversary's incentives. Adversaries who want to stay in an environment for months are willing to operate slowly. UEBA is built to catch fast deviations.

What slow credential abuse looks like in logs

Three patterns recur and are worth describing concretely.

Service account compromise followed by patient use. A build pipeline service account has broad read access across cloud storage. The attacker compromises it, then uses it sparingly: half a dozen reads per week, all to paths the account legitimately accesses for builds, interleaved with the pipeline's real traffic. Over six months the attacker reads selected sensitive content from each accessible bucket. The volume stays below any threshold. The accounts touched are all legitimate. The data leaves through the pipeline's normal egress paths.
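Compressed into a toy log (hypothetical bucket and object names, sixteen weeks rather than six months), the detectable signal is cumulative coverage over a long horizon, not rate:

```python
from datetime import date, timedelta

# Hypothetical read log: (day, object). Six reads a week looks unremarkable;
# the tell is how much of the sensitive set gets covered over months.
sensitive = {f"bucket-a/secret-{i}" for i in range(100)}
reads = [(date(2024, 1, 1) + timedelta(days=7 * wk + r),
          f"bucket-a/secret-{6 * wk + r}")
         for wk in range(16) for r in range(6)]

def coverage(reads, universe, window_days=180):
    """Fraction of the sensitive universe touched within the window."""
    cutoff = max(d for d, _ in reads) - timedelta(days=window_days)
    seen = {obj for d, obj in reads if d >= cutoff and obj in universe}
    return len(seen) / len(universe)

# Weekly rate: 6 reads, normal. Long-horizon coverage: most of the bucket.
print(f"{coverage(reads, sensitive):.0%} of sensitive objects read")
```

A rate-based threshold never fires on this log; a coverage metric over the full window makes the sweep obvious.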

Credential rotation as an attack vector. Attacker establishes persistence via a long-lived API key. Defender rotates the key, but the rotation process itself uses an automated workflow. Attacker watches the workflow, captures the new key during rotation, and continues access seamlessly. Every authentication looks correct because every authentication uses a valid current credential.

Privilege escalation through accumulation rather than elevation. Attacker does not request admin rights. The account is added to an existing group as part of a routine access review, then to another group three weeks later, then to another. Each addition has a documented business reason, often a fabricated one. After several months the account has effective admin access without ever having been granted admin access.
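The accumulation pattern reduces to a simple set computation over group grants. A toy sketch (group and permission names are invented):

```python
# Each routine group addition looks small on its own; the union of granted
# permissions quietly converges on the admin set.
group_perms = {
    "ci-readers":   {"repo.read"},
    "deploy-ops":   {"deploy.write", "secrets.read"},
    "infra-oncall": {"iam.modify", "host.login"},
    "admins":       {"repo.read", "deploy.write", "secrets.read",
                     "iam.modify", "host.login"},
}

# Group additions accumulated over several months, each with a plausible
# documented business reason. Note "admins" is never among them.
memberships = ["ci-readers", "deploy-ops", "infra-oncall"]

effective = set().union(*(group_perms[g] for g in memberships))
overlap = len(effective & group_perms["admins"]) / len(group_perms["admins"])
print(f"effective/admin overlap: {overlap:.0%}")
```

No single grant is an escalation, and a per-event review of each addition finds nothing; only the accumulated effective permission set, compared against the admin set, reveals the end state.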

In each case, no individual event deviates strongly from the baseline. The pattern only becomes visible when activity is read in sequence, with context, across entities.

What it took to build a model that catches this

Detecting slow credential abuse requires a model that sees activity across entities, retains context across long time horizons, and reasons about sequence rather than rate. Building that into LogLM was a multi-track engineering investment.

Cross-entity training data. Pretraining on per-entity sequences alone reproduces the UEBA limitation. Training data has to include cross-entity activity patterns, where the model sees how an event on one entity relates to events on others. Curating this data and ensuring it included both legitimate cross-entity activity (federation, delegation, automation) and adversarial cross-entity activity (lateral movement, credential reuse) was a substantial labeling effort.

Time horizon extension. Standard transformer context windows are not enough for slow attacks. The architecture had to handle long sequences efficiently, with engineering work to make the inference cost tractable at the volumes seen in production environments.

Embedding stability over baseline drift. The model's embeddings need to remain stable as legitimate behavior evolves. If the embedding for normal service account behavior drifts at the same rate the UEBA baseline drifts, the same problem appears. We invested in regularization and adaptive learning techniques that update the model's understanding without absorbing slow malicious drift into the legitimate region of embedding space.
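Whatever the production technique looks like, the core idea can be illustrated generically. The sketch below is not DeepTempo's actual method; the update rule, `lr`, `reg`, and the anchor cadence are invented for illustration. It updates an embedding toward new observations while a regularization term pulls it back toward a slowly refreshed anchor, so sustained one-directional drift is damped rather than absorbed:

```python
import math

def update(emb, anchor, obs, lr=0.1, reg=0.5):
    # EWMA step toward the observation, pulled back toward a slow anchor.
    return [e + lr * (o - e) - reg * lr * (e - a)
            for e, a, o in zip(emb, anchor, obs)]

anchored, plain = [0.0, 0.0], [0.0, 0.0]
anchor = [0.0, 0.0]                      # refreshed on a much slower cadence
for step in range(200):
    obs = [0.01 * step, 0.0]             # slow, one-directional malicious drift
    anchored = update(anchored, anchor, obs)
    plain = update(plain, plain, obs)    # reg term vanishes: ordinary EWMA

target = [0.01 * 199, 0.0]
print(f"plain EWMA gap: {math.dist(plain, target):.2f}, "
      f"anchored gap: {math.dist(anchored, target):.2f}")
```

The unregularized baseline ends up tracking the drifted behavior closely (the UEBA failure mode); the anchored update lags it, leaving the drifting account measurably outside the legitimate region so it remains flaggable.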

Domain-expert labeling at the long tail. Most of the highest-value training labels for slow credential abuse came from domain experts who recognized patterns that automated labeling would miss. This is slow, expensive work. It is also where the accuracy delta against UEBA originates.

These investments are why building a vertical foundation model for security is different from prompting GPT against a SIEM. The architecture, the data, the labels, and the iteration loop are all domain-specific. There is no shortcut.

How DeepTempo's LogLM catches slow credential abuse

Two properties of the architecture matter for this category. The model represents activity as embeddings that capture meaning, not rate. A single read of a sensitive file by a service account that has never read sensitive files before lands in a different region of embedding space than the same read by an account that does that all day. UEBA cannot easily express this difference because it is cross-entity. The LogLM expresses it natively.

The model retains and uses sequence context across long horizons. An access today, evaluated against the same account's activity over the previous quarter, can sit in the same region of embedding space whether the access came one day after the previous one or one month after. Threshold-evading activity does not evade the embedding.

In production this shows up as detections that fire on patterns SOC teams genuinely cannot see in their existing tools. Slow service-account exfiltration, credential rotation captures, and privilege accumulation are all detectable categories in DeepTempo deployments.
