Alert working example

Case A · The finance-team phishing alert

A SIEM rule fires at 09:11. An Outlook process on a finance laptop spawned cmd.exe, which spawned PowerShell with -NoProfile -WindowStyle Hidden -EncodedCommand. The destination of the resulting network call is a domain registered four days ago. The named user is dlin, a finance analyst.

Step 1. Read the detection

Before anything else, the analyst reads what fired and why. The rule’s logic chains three behavioral signals: Outlook → cmd.exe → PowerShell (suspicious parent-child), -EncodedCommand with -NoProfile (a near-universal indicator of automated execution), and outbound traffic to a low-reputation domain within 30 seconds. The rule is a behavioral-analytics rule, not a signature. False-positive rate is documented as roughly 1 in 200 fires; the rule has high precision when the chain is complete.

Three signals in a single chain is far more weight than any one of them would carry alone. The detection mechanism’s confidence is already moderately high before the analyst opens the alert payload.
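The chain logic can be sketched as a conjunction of the three signals. This is a minimal illustration, not the production detection; the process names, flag set, and 30-second threshold are taken from the description above, and everything else is assumed:

```python
# Hedged sketch of the three-signal chain; names and thresholds are
# illustrative, not the production rule.
SUSPICIOUS_CHAIN = ["outlook.exe", "cmd.exe", "powershell.exe"]
AUTOMATION_FLAGS = {"-encodedcommand", "-enc", "-noprofile", "-nop"}

def rule_fires(process_chain, cmdline_flags, net_delay_s, domain_reputation):
    """Fire only when all three behavioral signals are present together."""
    chain_match = [p.lower() for p in process_chain[-3:]] == SUSPICIOUS_CHAIN
    flags = {f.lower() for f in cmdline_flags}
    flag_match = len(AUTOMATION_FLAGS & flags) >= 2   # e.g. -NoProfile + -EncodedCommand
    net_match = net_delay_s <= 30 and domain_reputation == "low"
    return chain_match and flag_match and net_match
```

The 09:11 alert satisfies all three clauses, which is why its precision is far higher than any single signal would justify.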

Step 2. Validate the signal

The four-dimensional validation:

  • Technical. The parent-child chain matches a textbook initial-access pattern used by PowerShell Empire (an open-source post-exploitation framework with PowerShell agents on Windows and Python agents on Linux; Empire was widely used by red teams, and many real-world intrusions still show Empire-style command-line and network patterns) and similar PowerShell-based agents.
  • Environmental. Finance analysts on this laptop class do not normally invoke PowerShell. The host’s 30-day baseline confirms it.
  • Intelligence. The destination domain is registered four days ago, hosted on a bulletproof provider, low prevalence, and resolves to an IP block previously seen in commodity-phishing reporting.
  • Business. dlin has read access to the cloud finance reporting platform via federated SSO. The laptop is the trust pivot for a regulated-data system.

All four dimensions agree the signal is worth a deeper investigation. None of them is decisive on its own; the combination is.
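The escalation decision can be sketched as a simple aggregation in which no single dimension decides. The dimension names follow the list above; the threshold and field names are assumed for illustration:

```python
# Hypothetical four-dimension checklist for Case A; the >= 3 threshold
# is an assumption, not documented methodology.
validation = {
    "technical": True,      # Empire-style parent-child chain
    "environmental": True,  # PowerShell absent from the 30-day baseline
    "intelligence": True,   # 4-day-old domain on bulletproof hosting
    "business": True,       # federated SSO path into regulated data
}

def escalate(dims):
    # No single dimension is decisive; escalate on broad agreement.
    return sum(dims.values()) >= 3

escalate(validation)  # all four agree -> deeper investigation
```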

Step 3. Parse the metadata

Structured extraction from the alert payload:

  • Command line. powershell.exe -NoP -W Hidden -EncodedCommand <base64>. The base64 decodes to a small downloader that reaches the suspicious domain.
  • Process tree. outlook.exe (PID 5212) → cmd.exe (PID 8104) → powershell.exe (PID 8128). Parent timestamp aligns with an email open event.
  • Network. 09:11:43 UTC outbound HTTPS to invoice-records[.]com/upd. Self-signed TLS cert. Approximately 18 KB transferred.
  • File system. %TEMP%\upd.ps1 written at 09:11:44 UTC. SHA-256 captured.
  • Identity. Active session: dlin@corp. Logon type 2 (interactive). MFA satisfied at 08:48.
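Decoding the command line is mechanical: PowerShell's -EncodedCommand expects base64 over UTF-16LE text. A minimal decoder follows; the sample payload is illustrative, not the one from this alert:

```python
import base64

def decode_encoded_command(b64: str) -> str:
    """PowerShell's -EncodedCommand takes base64 over UTF-16LE text."""
    return base64.b64decode(b64).decode("utf-16-le")

# Illustrative round trip; the real payload stays in the case record.
sample = base64.b64encode("Write-Output hello".encode("utf-16-le")).decode()
decode_encoded_command(sample)  # -> "Write-Output hello"
```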

Step 4. Frame the hypothesis

The Alert phase finishes by stating the hypothesis the next phases will test. Working hypothesis: an attacker delivered a phishing email containing a macro-bearing document; dlin opened the attachment; the macro launched the cmd → PowerShell chain; the resulting agent established outbound C2 to a recently-registered staging domain. The case looks like a textbook Empire-style initial access.

The phrasing is deliberate. The analyst is not declaring an incident. They are recording an actionable hypothesis with explicit evidence and known gaps, so Subject knows where to expand the investigation.

Step 5. Hand to Subject

Subject inherits a structured starting point:

  • Named entities. dlin (user), laptop-finance-09 (host), invoice-records[.]com (external domain), the originating sender (TBD via mail log pull).
  • Open questions. Who else got the email? Did anyone else execute the macro? What did the federated cloud session do, if anything?
  • Confidence. Moderately high on the parent-child chain (telemetry is complete); low on attribution (no actor signature, only TTP-level match to commodity Empire-style tooling).
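The handoff itself is just structured data. A sketch of what Subject receives, with field names that are illustrative rather than a real schema:

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    """Illustrative Alert -> Subject handoff record (not a real schema)."""
    entities: dict
    open_questions: list
    confidence: dict

case_a = Handoff(
    entities={"user": "dlin", "host": "laptop-finance-09",
              "domain": "invoice-records[.]com", "sender": "TBD"},
    open_questions=[
        "Who else got the email?",
        "Did anyone else execute the macro?",
        "What did the federated cloud session do, if anything?",
    ],
    confidence={"parent_child_chain": "moderately high",
                "attribution": "low"},
)
```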

Case B · The Cursor IDE Empire-pattern alert

The same Alert methodology applied to a very different signal: a second alert fires later the same day, on a different team's laptop. The parent-child chain looks superficially similar, but the host, the user, and the binary all tell a different story.

A behavioral-EDR alert fires at 14:03 on a macOS developer workstation. A signed helper binary (Cursor Helper, part of the Cursor AI-assisted IDE) spawned a child process that disabled its sandbox and made encrypted outbound network calls. The detection mapped the pattern to known Empire-style post-exploitation behavior. The named user is a frontend engineer.

Step 1. Read the detection

The rule that fired is a cross-platform behavioral-analytics rule looking for “process disables its sandbox, spawns helper, opens encrypted outbound socket.” It was originally written from a Windows engagement against an EmPyre agent (the Python fork of PowerShell Empire, since merged back into the Empire framework; both names refer to the same lineage) and was later ported to macOS. Documented false-positive rate on macOS developer hosts: ~6× higher than on managed productivity laptops. That is a known calibration issue, captured in the rule’s metadata.

The rule’s confidence on this host class is therefore lower from the start. The analyst notes that explicitly before opening the payload.
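The calibration effect is easy to quantify. Assuming, purely for illustration, a managed-laptop false-positive rate of 1 in 200 fires (the figure from Case A's rule; this rule's baseline is not stated), the documented 6× multiplier shifts the per-fire precision before any evidence is examined:

```python
# Back-of-envelope prior. The 1-in-200 baseline is an assumption
# borrowed from Case A's rule; only the 6x multiplier is documented.
baseline_fp_rate = 1 / 200        # managed productivity laptops (assumed)
dev_multiplier = 6                # documented macOS developer-host factor

dev_fp_rate = baseline_fp_rate * dev_multiplier
prior_precision = 1 - dev_fp_rate

print(round(dev_fp_rate, 3), round(prior_precision, 3))  # 0.03 0.97
```

A prior precision of 0.97 is still high in absolute terms; the point is that the analyst starts with less slack on this host class, which is exactly why the four-dimension validation carries more of the load.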

Step 2. Validate the signal

The four-dimensional validation reads differently from Case A:

  • Technical. The pattern (disabled sandbox, encrypted outbound from a helper process) does overlap with Empire’s macOS agent behavior at the surface level. But the binary itself is signed and notarized by the vendor, and the install path is a system-managed application directory, not a user-writable staging area.
  • Environmental. The host is a managed developer workstation. Helper-process activity from IDEs is part of the daily baseline. The user’s peers show the same pattern on the same vendor binary.
  • Intelligence. No prior incidents involving this binary or vendor. The TLS destination resolves to the vendor’s documented inference infrastructure with a valid cert.
  • Business. Source code exposure exists, but the binary’s runtime behavior is consistent with the vendor’s documented operating mode.

Three of four dimensions point away from compromise. The technical dimension has a surface match but is contradicted by deeper context. The methodology calls for continuing the investigation, not closing the alert; the hypothesis is now “likely false-positive, confirm via Subject and Uncover.”

Step 3. Parse the metadata

  • Command line. /Applications/Cursor.app/Contents/Frameworks/Cursor Helper (Plugin).app/Contents/MacOS/Cursor Helper (Plugin) --type=utility …. Vendor-standard.
  • Process tree. launchd → Cursor → Cursor Helper → Cursor Helper (Plugin). All four binaries signed by Anysphere, notarized by Apple.
  • Network. Multiple TLS connections to *.cursor.sh and to a small set of LLM-inference hosts, all with valid certificates, all resolving via standard public DNS.
  • File system. Reads in the user’s home and the Cursor application support directory. No writes outside expected paths.
  • Identity. Active session: a frontend engineer. SSO login at 09:14 with FIDO2 second factor.
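A first-pass check that can be automated at handoff is confirming the signer chain recorded in the process tree. This is a pure illustration over recorded metadata; the field names are assumed, and the signer strings come from the tree above:

```python
# Illustrative check over recorded process metadata (field names assumed).
EXPECTED_SIGNER = "Anysphere"  # the vendor signature from this alert

tree = [
    {"name": "launchd", "signer": "Apple"},
    {"name": "Cursor", "signer": "Anysphere"},
    {"name": "Cursor Helper", "signer": "Anysphere"},
    {"name": "Cursor Helper (Plugin)", "signer": "Anysphere"},
]

def vendor_chain_intact(tree, vendor):
    # Everything below the OS root should carry the vendor's signature.
    return all(p["signer"] == vendor for p in tree[1:])

vendor_chain_intact(tree, EXPECTED_SIGNER)  # True for this alert
```

On a live macOS host the same question is usually answered with codesign verification rather than recorded metadata, but the recorded tree is what Subject inherits.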

Step 4. Frame the hypothesis

Working hypothesis: the detection logic correctly recognized a process-behavior pattern that overlaps with Empire-style macOS post-exploitation tradecraft. The binary, signature chain, install path, and behavioral baseline all suggest this is the documented operating mode of a vendor-supplied IDE plugin host, not adversary tooling. Subject and Uncover will either confirm the false-positive read or surface signals that override it.

The hypothesis is deliberately not “definitely not malicious.” The Alert phase records what the evidence permits, not what the analyst feels.

Step 5. Hand to Subject

Subject inherits a narrower starting point than Case A’s, with explicit caveats:

  • Named entities. The frontend engineer (user), the macOS laptop (host), Cursor Helper (Plugin) (binary), the vendor’s inference endpoints (external).
  • Open questions. Does the binary’s behavior match peer-group baselines? Are there detection-tuning opportunities for this host class?
  • Confidence. Moderate-high on the binary identity and signature chain; medium on runtime posture (sandbox-disabled behavior is intrinsically harder to baseline).

Next up

Transition to Subject

The Alert chapter's segue into Subject: what Alert hands forward and what Subject picks up from these two cases.
