Insider analysis

Risk vs. threat

The vocabulary matters. The two words mean different things, and the response differs accordingly.

🟡 Insider risk

The possibility that an insider’s actions (intentional or not) cause harm. Includes mistakes, policy violations, unsafe habits, and well-meaning workarounds. Most insider risk is not malicious. Most insider risk is also not random; it follows the same patterns of stress, change, and incentive that any human behavior does.

🔴 Insider threat

An insider acting with intent to harm the organization. Theft of intellectual property, sabotage, fraud, exfiltration to an external party. Far rarer than risk, but with disproportionate impact when it happens. The methodology for detecting threat overlaps with risk monitoring but requires escalation patterns that the rest of the SOC may not be used to.

CASE STUDY

Edward Snowden: when triage misses intent

Snowden’s actions at the NSA were not negligent errors. They were a deliberate, sophisticated insider threat. He had authorized access, knew the systems, and exfiltrated classified material over an extended period. The activity itself looked legitimate on the surface, which is exactly why traditional anomaly-driven triage missed it. Volume, file access, system queries: all within an admin profile’s expected envelope.

The lesson for Subject: entity-based behavioral analytics that compared Snowden’s activity against his own historical profile, against peers in similar roles, and against the sensitivity of the data accessed could have produced earlier signals. Detection rules built around “is this normal in the environment” would not have. The methodology treats role + data sensitivity as a multiplier on every anomaly score; that multiplier is what surfaces sophisticated insiders.
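The multiplier idea can be sketched in a few lines. Everything here is illustrative: the role weights, sensitivity tiers, and scoring function are assumptions made for the sketch, not Subject's actual model.

```python
from statistics import mean, stdev

# Illustrative weights; real values would come from data classification
# and role reviews, not hard-coded tables.
SENSITIVITY = {"public": 1.0, "internal": 1.5, "confidential": 2.5, "restricted": 4.0}
ROLE_WEIGHT = {"analyst": 1.0, "engineer": 1.2, "admin": 1.5}

def anomaly_score(history, today, role, data_class):
    """Score today's activity against the user's OWN history (z-score),
    then multiply by role and data-sensitivity weights."""
    mu, sigma = mean(history), stdev(history)
    z = (today - mu) / sigma if sigma else 0.0
    return max(z, 0.0) * ROLE_WEIGHT[role] * SENSITIVITY[data_class]

# An admin touching restricted data at a volume only modestly above
# baseline still scores high because of the multiplier; that is the point.
history = [100, 110, 95, 105, 120, 98, 107]
admin_score = anomaly_score(history, 140, "admin", "restricted")
peer_score = anomaly_score(history, 140, "analyst", "public")
```

With identical volumes, the admin/restricted combination scores six times higher than the analyst/public one (1.5 × 4.0 versus 1.0 × 1.0); the same raw anomaly surfaces or stays buried depending on who did it and what they touched.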

A second case: the marketing team and the DLP workaround

A marketing team unintentionally bypassed Data Loss Prevention controls by converting confidential documents into image files and sharing them externally with a client during a collaboration. The workaround inadvertently exposed records for 3,200 customers.

Investigation found no malicious intent. The behavior stemmed from frustration with secure file-sharing tools that added an average of 27 minutes to each transaction. The team was solving a usability problem, not stealing data.

The remediation focused on the underlying friction: the approved sharing workflow was redesigned to take 4 minutes per exchange (down from 27), and a tailored process was added for marketing’s specific collaboration needs. Compliance improved 92% within 30 days.

This is insider risk, not threat. The investigation pivoted from forensics to process improvement. Treating it as threat would have produced an enforcement action that did not actually fix the cause.


The Insider Threat Matrix

The Insider Threat Matrix, a community-maintained framework at insiderthreatmatrix.org, organizes insider tradecraft into five phases (motivation, means, preparation, infringement, anti-forensics), much as MITRE ATT&CK catalogs external adversary techniques. Used as a lens, it gives analysts a shared vocabulary and detection guidance for describing what they are seeing.

01

🎯 Motivation

Why an insider might act. Financial pressure, ideology, grievance, ego, coercion, or compromise by an external party. Rarely visible in telemetry; usually visible in human signals (HR concerns, expressed dissatisfaction, sudden lifestyle changes).

02

🛠️ Means

What access and capability the insider already has. The matrix catalogs the techniques an insider can use without ever needing to “hack in”; they are already inside.

03

🧭 Preparation

The steps an insider takes before the harmful action: reconnaissance of internal systems, acquisition of additional credentials, learning what the SOC watches. These are the early signals if anyone is looking.

04

⚠️ Infringement

The actual harmful action. Theft of intellectual property, sabotage, fraud, exfiltration. The visible event most investigations start from.

05

🧽 Anti-forensics

What the insider does to make the action hard to trace. Log clearing, timestomping, using accounts they should not be using (or accounts that “belong” to someone else). Cleanup is itself a signal.

Inside each phase

Motivations the matrix catalogs
  • Financial gain. Data theft or abuse of access for personal profit, often tied to fraud or IP theft. Triage correlates access to high-value assets with anomalous transfers or DLP violations, especially off-hours or to personal destinations.
  • Coercion or blackmail. External pressure produces abrupt, high-risk actions inconsistent with the user’s history. Sudden privilege escalations or sensitive access outside normal duties, often with concealment.
  • Revenge or resentment. Triggered by demotions, conflicts, perceived mistreatment. Behavior changes after HR actions: unusual access, deletions, misconfigurations on critical systems.
  • Curiosity or challenge. Non-malicious boundary testing by privileged users. Lateral movement or unauthorized exploration; weigh against legitimate training environments.
  • Ideological beliefs. Political or personal convictions producing data leaks or sabotage. Anonymization tools, external data staging, access aligned with the activism target.
  • Negligence or apathy. Careless behavior creating risk through weak hygiene. Repeated policy violations or mishandled sensitive data without clear malicious intent. This is risk territory, not threat.
Means the matrix evaluates
  • Technical access. Beyond role-based access: residual permissions from previous teams, inherited group rights, lateral-movement paths via shared service accounts. Access path mapping surfaces dormant access that role review misses.
  • Environmental awareness. Insiders can exploit knowledge of internal processes, weak controls, or architectural blind spots. Tenure, cross-department experience, and access to internal documentation all increase environmental awareness.
  • Privileged access. Admins and engineers bypass routine controls by design. Triage applies tighter scrutiny: unusual admin tool use, escalation paths, sensitive system access outside the job pattern.
  • Physical access. Hardware tampering, offline exfiltration via USB, planted rogue devices. Badge logs, USB telemetry, and secure-zone entry data belong in insider triage.
Preparation activity to watch for
  • Technical prep. Penetration tools, encryption utilities, scripting platforms not normally required for the role. Often deployed in sandboxed or temporary directories. Monitor for unusual binary executions and failed installs from non-admin paths.
  • Data gathering. Testing access to datasets outside job scope. Small sample downloads or probing archival stores that escalate in size and frequency. Sudden access to legacy systems or large non-recent datasets is a precursor.
  • Temporal indicators. Preparation activity shifts to off-hours, weekends, holidays. These windows often coincide with reduced staffing. Alerting on high-sensitivity access during low-visibility hours surfaces this.
  • Circumventing testing. Deliberate threshold-testing: failed logins, policy violations, copying data across classifications. Individual events look benign; the pattern over days or weeks is the signal.
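The pattern-over-time point can be made concrete. In this sketch (the window length, event types, and threshold are illustrative assumptions), a user is flagged only when several different sub-threshold probe types co-occur inside a sliding window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)   # illustrative
MIN_DISTINCT_TYPES = 3        # illustrative

def circumvention_flags(events):
    """events: (user, timestamp, event_type) tuples where every event_type
    is individually benign (failed_login, dlp_warn, cross_class_copy, ...).
    Flag users who accumulate several DIFFERENT probe types in the window."""
    flagged, by_user = set(), defaultdict(list)
    for user, ts, etype in sorted(events, key=lambda e: e[1]):
        # Keep only this user's events still inside the window, then append.
        by_user[user] = [(t, e) for t, e in by_user[user] if ts - t <= WINDOW]
        by_user[user].append((ts, etype))
        if len({e for _, e in by_user[user]}) >= MIN_DISTINCT_TYPES:
            flagged.add(user)
    return flagged

d = datetime(2024, 1, 1)
events = [
    ("alice", d, "failed_login"),
    ("alice", d + timedelta(days=3), "dlp_warn"),
    ("alice", d + timedelta(days=9), "cross_class_copy"),
    ("bob", d, "failed_login"),
    ("bob", d + timedelta(days=30), "failed_login"),
]
print(circumvention_flags(events))  # {'alice'}
```

No single event here would page anyone; the combination of three distinct probe types in two weeks is what crosses the bar.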
Infringement: how harmful action actually happens
  • Execution patterns. Rather than one large action, insiders often spread access or extraction over days or weeks, staying below alert thresholds. Repeated small-scale transfers from sensitive systems are worth flagging even when each is under threshold.
  • Technical execution. Test queries to datasets outside scope, bulk downloads that escalate over time. Treat sudden access to legacy systems or large non-recent datasets as a precursor to broader exfiltration.
  • Compromised credentials. “Borrowing” accounts from coworkers who are on vacation, on leave, or recently departed. Usage of dormant or newly reactivated identities should be correlated against the original owner’s expected behavior.
  • Supply chain manipulation. Abusing access to development pipelines or procurement to insert malicious code or alter dependencies. Tie last-minute code changes by high-privilege users to change-control tickets and peer approvals.
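A rolling-window sum is the simplest counter to the spread-out execution pattern above. The limits below are illustrative assumptions, not recommended values:

```python
from datetime import datetime, timedelta

PER_EVENT_LIMIT = 500  # MB; the single-event DLP threshold each transfer stays under (illustrative)
ROLLING_LIMIT = 2000   # MB; cumulative cap over the window (illustrative)
WINDOW = timedelta(days=30)

def below_threshold_exfil(transfers):
    """transfers: time-sorted (timestamp, megabytes) pairs for one account,
    each individually under PER_EVENT_LIMIT. Flag when the rolling
    30-day total crosses the cumulative cap, and report when."""
    for i, (ts, _) in enumerate(transfers):
        window_total = sum(mb for t, mb in transfers[: i + 1] if ts - t <= WINDOW)
        if window_total > ROLLING_LIMIT:
            return True, ts
    return False, None

d = datetime(2024, 3, 1)
drip = [(d + timedelta(days=5 * i), 400) for i in range(8)]  # 400 MB every 5 days
print(below_threshold_exfil(drip))
```

Each 400 MB transfer sails under a 500 MB per-event threshold; the rolling total crosses 2000 MB on the sixth transfer, 25 days in.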
Anti-forensics: how insiders try to hide
  • Technical concealment. Deleting or truncating event logs, scrubbing shell history, tampering with audit trails, overwriting timestamps. Encryption tools or shred scripts deployed shortly after activity. Integrity validation on key forensic sources is the counter.
  • Behavioral concealment. Working under assumed identities, manipulating ticketing systems, crafting business cases that look legitimate but cover malicious access. Authenticate business justifications against peer approvals and organizational objectives.
  • Misdirection. Triggering low-impact security events to divert attention from the primary objective. Triage should ask whether simultaneous incidents are causally linked or part of a broader misdirection campaign.
  • Counter-intelligence. Monitoring SIEM alerts, ticket queues, or detection content to learn how the SOC operates. Compartmentalize investigative workflows and restrict access to detection logic.
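Integrity validation on key forensic sources can be as simple as a hash chain over the log stream. A minimal sketch (the record format is an assumption):

```python
import hashlib

def chain(records):
    """Tamper-evident hash chain: each digest covers the record plus the
    previous digest, so editing or deleting any record breaks every
    digest after it."""
    digests, prev = [], b""
    for rec in records:
        prev = hashlib.sha256(prev + rec.encode()).digest()
        digests.append(prev)
    return digests

def verify(records, digests):
    """Recompute the chain and compare against the sealed digests."""
    return chain(records) == digests

logs = ["login admin", "read /secrets/db", "logout admin"]
sealed = chain(logs)          # store these off-host
logs[1] = "read /public/ok"   # simulated log tampering
print(verify(logs, sealed))   # False
```

The sealed digests must live somewhere the insider cannot reach (a separate collector, a WORM store); on-host they are just one more thing to scrub.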

What insider analysis catches

📦 Slow-burn data collection

An insider gradually building a personal cache of sensitive documents over months. Each individual access looks legitimate; the trajectory is the signal.

🚪 Access creep before a known event

Requesting or accumulating permissions in the weeks before a known departure, performance review, or organizational change. Correlation with HR signals is what surfaces this.

🪪 Shared-account misuse

A shared account being used by an unexpected person. Hard to detect because the audit trail is intentionally ambiguous. Behavioral baselining of the shared account against its expected users is the strongest approach.

🔇 Anti-forensic cleanup

Log clearing, history wiping, USB usage from an account that has never used removable media. Each is a signal in its own right; the combination is almost always intentional.
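The slow-burn pattern above reduces to a trend test: fit a slope to per-month counts and flag sustained growth. A minimal sketch with invented numbers:

```python
def monthly_trend(counts):
    """Least-squares slope of per-month sensitive-document access counts.
    Any single month looks normal; a sustained positive slope is the
    trajectory signal."""
    n = len(counts)
    mean_x, mean_y = (n - 1) / 2, sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

cache_building = [12, 20, 27, 36, 44, 52]  # invented counts, six months
steady_state = [30, 28, 31, 29, 30, 30]    # invented counts, six months
```

The first series yields a slope of roughly eight extra documents per month; the second hovers near zero. Both have months that would pass any per-month review.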

Behavioral vs technical indicators

The strongest insider triage correlates two streams that often live in separate teams: human signals (HR, manager observations) and technical signals (logs, telemetry). Either stream alone produces noise. Together they produce signal.
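A minimal sketch of the two-stream join, assuming both feeds reduce to (user, timestamp) pairs and a 30-day window (both are assumptions for the sketch):

```python
from datetime import datetime, timedelta

CORRELATION_WINDOW = timedelta(days=30)  # illustrative, not a recommendation

def correlated_users(hr_signals, tech_anomalies):
    """Both inputs are lists of (user, timestamp). Return users where a
    technical anomaly lands within the window AFTER a human signal;
    either stream alone stays below the reporting bar."""
    hits = set()
    for user, hr_ts in hr_signals:
        for tech_user, tech_ts in tech_anomalies:
            if tech_user == user and timedelta(0) <= tech_ts - hr_ts <= CORRELATION_WINDOW:
                hits.add(user)
    return hits

d = datetime(2024, 6, 1)
hr = [("carol", d)]                                # e.g. HR referral
tech = [("carol", d + timedelta(days=10)),         # anomaly after the referral
        ("dave", d + timedelta(days=5))]           # anomaly with no HR signal
print(correlated_users(hr, tech))  # {'carol'}
```

The ordering constraint matters: a technical anomaly that precedes the human signal gets a different interpretation, so the sketch only counts anomalies in the window after it.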

Behavioral indicators

  • Expressed dissatisfaction. Verbal resentment toward management, policy changes, or peers. Worth investigating in context of recent disciplinary or organizational changes.
  • Sudden withdrawal. Shift toward isolation, minimal communication, refusal of collaborative work. Worth correlating against technical anomalies in the same window.
  • Unusual curiosity. Repeated questioning about systems, data classification, or infrastructure outside the user’s role. Cross-reference with access logs.
  • Schedule changes. Voluntary shifts to nights, early mornings, weekends without business justification, especially when paired with sensitive system access.
  • Resistance to oversight. Arguing against monitoring tools, complaining about controls, or circumventing security steps. Concerning if paired with technical attempts to disable controls.
  • Performance degradation. A decline after being passed over for a role or project. Risk-taking or careless access behavior shortly afterward is worth factoring into triage urgency.
  • Personal stressors. Major life events (divorce, debt, medical) can lead to risky decisions. Do not assume intent, but note the compounding pressure.

Technical indicators

  • Unauthorized data transfer tools. FTP clients, external sync utilities, USB mass storage on accounts that have not used them historically.
  • Low-volume data exports just below alert thresholds. Extracting moderate amounts over extended periods is a known evasion tactic. Look for patterns that consistently stay just under thresholds.
  • Privilege bypass attempts. Multiple failed access attempts to restricted systems, or self-approval of elevated permissions.
  • Compression and archiving outside normal workflows. Zip, tar, or rar files containing sensitive documents in personal folders or shared drives.
  • Disabling endpoint protections. Turning off antivirus, DLP agents, or logging services. Treat as high-priority and possibly isolate for forensic review.
  • Accessing dormant accounts or unrelated systems. Reactivating dormant credentials, touching systems unrelated to current responsibilities.
  • Use of anonymizing or encrypted services. Remote desktop sessions, encrypted messengers, unsanctioned cloud storage. Cross-reference with network and endpoint telemetry.

Mitigation strategies

Strong insider programs are built from five complementary practices. None of them alone is sufficient; together they create an environment where insider abuse is hard, visible, and expensive.

01

Least privilege enforcement

Users have access only to the data and systems required for their current job, with automated deprovisioning of stale roles and inherited access. Triage flags access attempts outside current scope. Role-based reviews integrate into quarterly compliance and into alert-suppression logic.
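The scope check itself is small once role scopes are exported from the IAM system; the roles and resource names below are invented for illustration:

```python
# Invented role-to-resource scopes; real scopes would be exported from
# the IAM system, not hard-coded.
ROLE_SCOPE = {
    "marketing": {"crm", "asset-library"},
    "engineer": {"source-repo", "ci", "staging-db"},
}

def out_of_scope_accesses(role, accessed_resources):
    """Return accesses outside the role's current scope; these feed
    triage and are excluded from alert-suppression logic."""
    allowed = ROLE_SCOPE.get(role, set())
    return [r for r in accessed_resources if r not in allowed]

print(out_of_scope_accesses("marketing", ["crm", "staging-db", "asset-library"]))
# ['staging-db']
```

The hard part is not this function but keeping ROLE_SCOPE current, which is why the text ties it to automated deprovisioning and quarterly review.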

02

Segregation of duties

No individual can create, approve, and execute sensitive actions on their own. Deployment, access grants, and audit log review live with separate roles. Triage treats violations of this model (especially with emergency access or after hours) as high-risk.
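The separation can be enforced mechanically at approval time. A sketch, assuming each sensitive action records its creator, approver, and executor:

```python
def sod_violation(action):
    """action maps 'creator', 'approver', and 'executor' to identities.
    Any identity holding two or more of the three roles on a sensitive
    action violates segregation of duties."""
    people = [action["creator"], action["approver"], action["executor"]]
    return len(set(people)) < len(people)

clean = {"creator": "eve", "approver": "mgr", "executor": "ops-bot"}
dirty = {"creator": "eve", "approver": "eve", "executor": "ops-bot"}
print(sod_violation(clean), sod_violation(dirty))  # False True
```

The triage angle from the text: a violation detected on an emergency-access path or after hours is not just a policy finding, it is a high-risk signal in its own right.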

03

Technical monitoring

Behavioral analytics, privileged session monitoring, endpoint detection tuned for lateral movement and data aggregation. Quality alerts come from individual baselines, not just global policy violations. Triage prioritizes events that combine anomalous behavior with elevated privilege.

04

Cultural approaches

Employees feel safe reporting concerns through clear, non-retaliatory channels. The security culture surfaces early warnings before they become technical events. Triage considers HR referrals and social signals alongside log data.

05

Insider response protocols

Dedicated playbooks for internal actors: evidence preservation without tipping the subject, coordinated containment, post-event review. Mature programs maintain specialized insider threat cells separate from standard incident response.

Next up

Subject working example

A multi-step walkthrough applying Subject analysis to a realistic scenario.

See the working example