Subject working example

Working example

Finding all the identities behind the alert

The Alert chapter handed Subject one named user (dlin, a finance analyst) and one host (laptop-finance-09) connected to a suspicious PowerShell event. Subject's job is to find every other identity that the alert touches and assess each one.

Step 1. Start with the named user

The alert names dlin. The Subject phase begins by profiling that user explicitly:

  • Identity verification. The account is provisioned, matches an HR record, and is in the expected finance role.
  • Authentication history. Recent logins look normal until the day of the alert. Then a 09:08 login from the usual workstation, followed by the alert at 09:11.
  • Authorization. dlin has read access to finance reporting systems and the standard productivity stack. No administrative privileges.
  • Behavior. Activity profile is consistent with a finance analyst role. PowerShell is not part of dlin’s normal pattern.

Three of the four dimensions agree. The behavior dimension does not. PowerShell on this account is the first concrete signal that the alert is more than a misfire.
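The single-identity check above can be sketched in code. This is an illustrative data structure, not a vendor API: the dimension names come from the chapter, but the `DimensionResult` shape and the verdict logic are assumptions made for the example.

```python
# Hypothetical sketch of the four-dimension check on one identity.
# Dimension names follow the chapter; the structure is illustrative.
from dataclasses import dataclass

@dataclass
class DimensionResult:
    dimension: str   # identity | authentication | authorization | behavior
    normal: bool     # did this dimension agree with the baseline?
    note: str        # the evidence recorded for the handoff

def assess(results):
    """Return the dimensions that disagree; an empty list means all four agree."""
    return [r for r in results if not r.normal]

dlin = [
    DimensionResult("identity", True, "provisioned, matches HR record, expected role"),
    DimensionResult("authentication", True, "09:08 login from the usual workstation"),
    DimensionResult("authorization", True, "read-only finance access, no admin rights"),
    DimensionResult("behavior", False, "PowerShell is not part of dlin's pattern"),
]

anomalies = assess(dlin)
print([a.dimension for a in anomalies])  # ['behavior']
```

The point of recording the agreeing dimensions, not just the failing one, is that Scope inherits evidence rather than a verdict.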

Step 2. Expand to non-human identities

Subject does not stop at the named user. Every identity touched by the activity gets profiled.

  • The host. laptop-finance-09 is a managed corporate device, enrolled in the EDR, with a baseline of activity matching dlin’s role.
  • The service account. Before the alert fired, the laptop authenticated against a domain-joined service account that itself authenticates services on behalf of the user. That account’s recent activity becomes part of the picture.
  • The Outlook session. The parent process of the PowerShell invocation. Each Outlook session has its own token, mailbox identity, and audit trail.
  • The cloud identity. dlin’s federated SSO identity for the cloud-based finance reporting platform. Has the same recent-login pattern as the on-premises account.

One named user has become a cluster of identities: three that get full primary-depth investigation (dlin, the laptop, dlin’s federated cloud identity) and the rest secondary-depth (the service account and the Outlook session, plus two other finance users who, as Step 4 will surface, also touched the originating email). Each one is a potential lateral-movement vector. Each one needs its own assessment.
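The expansion step can be sketched as a breadth-one walk from the named user. The account names other than dlin and laptop-finance-09 are hypothetical stand-ins, and the flat `directory` dict is an assumption; a real version would query an identity provider or EDR telemetry.

```python
# Illustrative only: expanding one named user into the identity cluster
# Subject actually profiles. Non-alert entity names are invented.
def expand_identities(named_user, directory):
    """Breadth-one expansion: the user plus everything its activity touches."""
    cluster = {named_user}
    cluster.update(directory.get(named_user, []))
    return cluster

directory = {
    "dlin": [
        "laptop-finance-09",     # the host named in the alert
        "svc-finance-auth",      # hypothetical service-account name
        "outlook-session-dlin",  # the parent Outlook session
        "dlin@sso",              # hypothetical label for the federated cloud identity
    ],
}

cluster = expand_identities("dlin", directory)
print(sorted(cluster))
```

Each element of the returned cluster then gets its own four-dimension pass.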

Step 3. Apply the four dimensions

The four dimensions are now applied to every identity in the list, not just dlin.

  • Authentication. Every identity in the cluster authenticated normally. No impossible travel, no MFA bypass, no token replay signals.
  • Authorization. No identity exceeded its assigned permissions during the relevant window. Authorization checks pass.
  • Behavior. dlin’s user account behaved abnormally (PowerShell). The host’s process baseline shows no prior PowerShell use under this user. The service account and cloud identity show no unusual activity.
  • Relationships. The identities are tightly connected: same user, same device, same SSO chain. No surprising new connections to entities outside that cluster.

What a less experienced analyst could miss: applying the dimensions only to the named user. The service account and cloud identity are easy to overlook because the alert did not name them, but they share trust paths and could expose lateral movement if compromised.
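The Step 3 loop, applied to every identity rather than only the named user, can be sketched as follows. The per-identity check results are hard-coded to mirror the walkthrough; a real version would query authentication logs and the IdP for each one.

```python
# Sketch: the same four dimension checks run over the whole cluster.
# Results mirror Step 3 of the walkthrough; identity labels are generic.
results = {
    "dlin": {"authentication": True, "authorization": True,
             "behavior": False, "relationships": True},
    "laptop-finance-09": {"authentication": True, "authorization": True,
                          "behavior": True, "relationships": True},
    "service-account": {"authentication": True, "authorization": True,
                        "behavior": True, "relationships": True},
    "cloud-identity": {"authentication": True, "authorization": True,
                       "behavior": True, "relationships": True},
}

def anomalies(results):
    """Identities with at least one failing dimension, and which dimensions failed."""
    return {ident: [dim for dim, ok in checks.items() if not ok]
            for ident, checks in results.items()
            if not all(checks.values())}

print(anomalies(results))  # {'dlin': ['behavior']}
```

The loop is the safeguard against the mistake described above: the service account and cloud identity are checked because they are in the cluster, not because the alert named them.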

Step 4. Check relationships

The relationship dimension expands beyond the four identities into the systems and people they touch.

  • Communication graph. dlin’s mailbox in the relevant window shows a recent email with an unfamiliar attachment from an external sender. The sender is a candidate for the originating phishing source.
  • Other recipients. The same email pattern landed in three other finance team mailboxes. Two of them opened the attachment. Two more potential compromise candidates.
  • Downstream access. The cloud finance platform that dlin can access is connected to a vendor payment workflow. If the compromise extends to the cloud identity, payment systems are in scope for the next phase.

The Subject picture now consolidates: 3 primary entities (dlin, the laptop, the cloud identity) + 3 secondary entities (the service account and the two other finance users who opened the email) + 1 downstream system (the cloud finance platform feeding the vendor payment workflow), plus the external sender tracked as the originating threat. That is the map Scope inherits.
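The consolidated map can be represented as a small structure the next phase consumes directly. The dict shape and the non-alert entity names are assumptions for illustration, not a standard schema.

```python
# Sketch of the consolidated Subject map as Step 4 leaves it.
# Entity labels other than dlin and laptop-finance-09 are hypothetical.
subject_map = {
    "primary":   ["dlin", "laptop-finance-09", "dlin@sso"],
    "secondary": ["svc-finance-auth",            # hypothetical service-account name
                  "finance-user-2", "finance-user-3"],  # the two recipients who opened the email
    "downstream": ["cloud-finance-platform -> vendor-payment-workflow"],
    "threat":     ["external-sender"],           # candidate originating phishing source
}

total_entities = len(subject_map["primary"]) + len(subject_map["secondary"])
print(total_entities)  # 6
```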

Step 5. Hand off to Scope

Subject ends with a structured, defended view of every identity connected to the alert. The handoff to Scope answers:

  • Primary entities. dlin (user), laptop-finance-09 (host), dlin’s cloud identity (SSO).
  • Secondary entities. The service account dlin’s laptop authenticates against. Two other finance users who opened the same email. The originating sender.
  • High-value paths. The cloud finance platform → vendor payment workflow. Lateral movement here would have regulatory and financial impact.
  • Insider risk indicators. None. The pattern looks like external phishing, not insider activity. No anti-forensic signs.

What the Subject deliverable looks like in practice

Subject’s output is a short, structured document the next phase can act on without redoing the work:

  • Map of every identity in scope (primary + secondary + downstream).
  • Per-identity assessment across the four dimensions.
  • List of relationships that expand the investigation beyond the alert.
  • Recommended scope parameters: time window (24 hours before the alert), entities (the list above), depth of investigation (full for primary identities, baseline-comparison for secondary).
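A deliverable like the one above is easy to make machine-readable. The sketch below serializes the recommended scope parameters; the alert timestamp is taken from the walkthrough (09:11), but the date and the field names are invented for the example.

```python
# Sketch: the Subject deliverable's scope parameters as a serializable record.
# The date is hypothetical; only the 09:11 alert time comes from the chapter.
import json
from datetime import datetime, timedelta

alert_time = datetime(2024, 1, 15, 9, 11)  # hypothetical date, 09:11 alert

deliverable = {
    "window_start": (alert_time - timedelta(hours=24)).isoformat(),
    "window_end": alert_time.isoformat(),
    "depth": {"primary": "full", "secondary": "baseline-comparison"},
}

print(json.dumps(deliverable, indent=2))
```

Serializing the recommendation means Scope can diff it against what the investigation actually covered.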

Scope picks up here and turns this map into the formal boundary of the investigation.


A second scenario: the developer workstation

The first walkthrough showed Subject expanding outward from a single named user. Real triage often goes the other way: a noisy alert hits a high-skill identity whose normal behavior looks suspicious. Subject’s job there is to contain the assessment, not expand it.

Working example

Reading Subject when the named user is a developer

An EDR alert fires on a macOS laptop. A signed helper binary (Cursor Helper, part of the Cursor AI-assisted IDE) spawned a child process that disabled its sandbox and made encrypted outbound network calls. The pattern technically matches the Empire post-exploitation toolkit. The named user is a frontend engineer. The host is corporate-managed.

Step 1. Classify the device

Subject begins with what the asset is, not what the alert says. Subject does not have to redo Alert’s parsing, but it does have to attach the activity to a device profile.

  • Hardware and OS. Corporate macOS laptop, recent build, FileVault on.
  • Management posture. MDM-enrolled, JAMF profile verified, EDR (CrowdStrike) agent reporting normally, Gatekeeper enforced.
  • Classification. Developer workstation. This matters because the expected behavior is different from a standard productivity laptop.
  • Recent change history. No new admin agents, no recent OS reinstall, no profile drift in the last 90 days.

The device classification reframes the alert. On a finance laptop, the activity is suspicious-by-default. On a developer workstation, scripting and child-process activity are part of the role. The investigation does not stop here; the bar shifts.
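The reframing is mechanical enough to sketch: the same observed activity scored against role-dependent baselines. The role names and baseline sets are invented for illustration.

```python
# Sketch: role classification shifting the bar for the same telemetry.
# Role names and baseline activity sets are hypothetical.
ROLE_BASELINE = {
    "finance-analyst": {"outlook", "excel", "browser"},
    "developer": {"ide", "child-processes", "scripting", "encrypted-outbound"},
}

def suspicious(role, observed):
    """Activity outside the role's baseline is what warrants deeper investigation."""
    return observed - ROLE_BASELINE[role]

# Same activity, two device classifications:
print(suspicious("finance-analyst", {"scripting", "encrypted-outbound"}))
print(suspicious("developer", {"scripting", "encrypted-outbound"}))  # set()
```

On the finance baseline both behaviors stand out; on the developer baseline neither does. The investigation continues either way, but the weight of the evidence changes.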

Step 2. Read the user identity

Subject now runs the four-dimension analysis on the named user, focused on what the role allows.

  • Authentication. SSO login at 09:14, expected location, expected device, FIDO2 second factor. No anomalies.
  • Authorization. Frontend engineer, developer group, no admin rights, no production access. Some elevated capabilities granted to specific applications via macOS TCC, all approved.
  • Behavior. Spawning helper processes from IDEs is part of this user’s daily pattern. Encrypted outbound traffic from an IDE is normal (LSP backends, AI inference, package installs).
  • Relationships. Normal communication graph for the team. No new SSO links, no unexpected resource access this week.

All four dimensions agree on legitimate. That is a real signal, not the absence of one. Subject records it explicitly so Scope inherits a defensible verdict, not a shrug.

Step 3. Evaluate the application

The detection fired on a process. Subject has to assess the process’s identity as carefully as the user’s.

  • Binary identity. Cursor Helper (Plugin), signed by Anysphere, notarized by Apple. Hash matches the public release.
  • Installation path. /Applications/Cursor.app/Contents/Frameworks/…, the expected location for a vendor-installed Electron application.
  • Runtime posture. Electron framework with sandbox disabled at runtime, which is the documented operating mode for the IDE’s plugin host. Not an attacker turning off protections; a vendor design choice.
  • Behavioral baseline. The same binary has been running on this laptop and on dozens of peers on the same team for months. No prior alerts on the binary itself.

How an Empire-style attack would look different: the binary would be unsigned or signed by a key that does not match the vendor; the install path would sit in a user-writable directory; and the behavioral baseline would show a first appearance within the last few days. None of those match here.

Step 4. Check access scope

Even when the user, device, and application all read legitimate, Subject has to record what the activity actually had access to. Scope inherits this directly.

  • Privilege level. Standard user context, no sudo or root elevation, no privilege escalation events in the timeline.
  • Sandbox status. Explicitly disabled, but only for the plugin helper as documented. Other Cursor processes still sandboxed.
  • Paths touched. User home, /tmp, Cursor’s application support directory. No system paths, no other users’ homes, no credential stores.
  • Network destinations. Cursor’s vendor endpoints plus a small number of resolved code-completion model hosts. All previously seen, all signed certs, no DNS anomalies.

Subject’s access-scope verdict: bounded to the user’s own profile and to expected vendor infrastructure. That bounds Scope’s environmental footprint to roughly the same surface.
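The access-scope check reduces to comparing observed paths and destinations against the expected surface. The allowlisted prefixes and hostnames below are hypothetical stand-ins, not Cursor's real endpoints.

```python
# Sketch: bounding observed activity against the expected surface.
# Prefixes and hostnames are invented placeholders.
ALLOWED_PREFIXES = ("/Users/dev/", "/tmp/")
KNOWN_HOSTS = {"vendor.example.com", "models.example.net"}

def out_of_scope(paths, hosts):
    """Anything outside the expected surface widens Scope's footprint."""
    unexpected_paths = [p for p in paths if not p.startswith(ALLOWED_PREFIXES)]
    unexpected_hosts = [h for h in hosts if h not in KNOWN_HOSTS]
    return unexpected_paths + unexpected_hosts

observed_paths = ["/Users/dev/project/src", "/tmp/build-cache"]
observed_hosts = ["vendor.example.com", "models.example.net"]
print(out_of_scope(observed_paths, observed_hosts))  # []
```

An empty result is exactly the "bounded" verdict Subject records; any non-empty result becomes an entity or system Scope must include.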

Step 5. Decide the operational risk

The four-dimensions and access-scope checks all return clean. Operational function is the final filter: even when nothing is wrong, the value of the asset informs how much weight Subject gives to remaining ambiguity.

  • Business function. Daily development of customer-facing software. Code commits multiple times per day.
  • Data sensitivity. Source code, dev secrets, possibly customer-data-adjacent test fixtures. High.
  • Lateral movement risk. Moderate. SSH keys to build environments. Persistent terminal sessions to CI runners.
  • Persistence potential. High. The machine is on every day, with predictable patterns an attacker could blend into.
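The operational-function filter can be sketched as a simple weighted score over the factors above. The levels and weights are invented for illustration; the point is only that a clean verdict on a high-value asset still carries more residual weight than one on a throwaway machine.

```python
# Sketch: weighting residual ambiguity by operational value.
# Level names and numeric weights are hypothetical.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def operational_weight(factors):
    """Higher totals mean remaining ambiguity deserves more scrutiny."""
    return sum(LEVELS[level] for level in factors.values())

developer_workstation = {
    "data_sensitivity": "high",      # source code, dev secrets
    "lateral_movement": "moderate",  # SSH keys, CI runner sessions
    "persistence": "high",           # always-on, predictable patterns
}
print(operational_weight(developer_workstation))  # 8
```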

Subject's verdict and the Scope handoff

Subject’s conclusion: false-positive with explicit caveats. The activity matches a known Empire-style pattern in the detection logic but is consistent with the documented operating mode of a signed, trusted, role-appropriate application running on a high-trust developer workstation.

What Subject hands to Scope:

  • Primary entities: the developer, the laptop, the Cursor Helper binary. All read clean across four dimensions.
  • Confidence: high on the binary identity and user behavior. Medium on the runtime posture, because sandbox-disabled behavior is hard to baseline.
  • Recommended scope: narrow. 24-hour time window. Just this host and the user. No peer-group expansion required.
  • Open question for Uncover: are there detection rules that should be tuned for Cursor’s plugin-host pattern so this stops firing repeatedly? Yes, but that is improvement work, not investigation work.
  • Insider posture: none observed. Normal behavior for this role.

This handoff is what a “false-positive” looks like when written properly. Not “dismissed.” Documented.


Next up

Subject chapter quiz

Five questions on the four dimensions, entity types, behavioral framework, and insider analysis.

Take the chapter quiz