Leveraging threat intelligence

Five ways to use intel


🎯 Indicator matching

Indicator matching compares local telemetry (DNS queries, process hashes, outbound connections) against known IOCs from threat-intel sources. Sources span commercial threat feeds, government advisories, ISACs, open-source databases, and internal reports.

๐ŸŒ Network indicators

IP addresses, URLs, and domain names associated with malicious infrastructure (C2 servers, phishing sites, malware staging).

📄 File indicators

Cryptographic hashes (MD5/SHA256) identifying known malware payloads or tools.

📧 Email indicators

Suspicious senders, subject patterns, and attachment signatures from known campaigns.

🧬 Behavioral indicators

Specific system modifications: registry changes, scheduled tasks, mutex creation, or other distinctive patterns observed in known malware.

Indicator matching process

01

Ingestion

Collect and normalize indicators from diverse sources into structured formats optimized for automated matching.

02

Validation

Evaluate indicator quality, relevance, and reliability before operational deployment.

03

Integration

Deploy matching capabilities across SIEM, EDR, NDR, and proxy systems.

04

Retroactive hunting

Analyze historical data to uncover previously undetected compromises using newly acquired indicators.

05

Real-time monitoring

Implement alerting mechanisms for immediate detection of indicator matches in ongoing activity.

06

Feedback

Systematically document true positives and false positives to continuously refine intelligence quality.
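The ingestion-to-matching core of this process can be sketched in a few lines: normalize feed entries into a uniform shape, then check each telemetry event against the set. The field names (`type`, `value`, `dns_query`, and so on) and the sample values are illustrative, not a standard feed schema:

```python
def normalize_iocs(raw_feeds):
    """Flatten feed entries into (type, lowercase value) pairs for O(1) lookup."""
    iocs = set()
    for entry in raw_feeds:
        iocs.add((entry["type"], entry["value"].strip().lower()))
    return iocs

def match_event(event, iocs):
    """Return the fields of a telemetry event that hit the indicator set."""
    candidates = [
        ("domain", event.get("dns_query")),
        ("sha256", event.get("process_hash")),
        ("ip", event.get("dest_ip")),
    ]
    return [(t, v.strip().lower()) for t, v in candidates
            if v and (t, v.strip().lower()) in iocs]

# Illustrative feed entry and event; note case normalization on match.
feeds = [{"type": "domain", "value": "Evil-C2.example"}]
iocs = normalize_iocs(feeds)
event = {"dns_query": "evil-c2.example", "dest_ip": "203.0.113.7"}
print(match_event(event, iocs))   # [('domain', 'evil-c2.example')]
```

The same `match_event` function serves both the retroactive-hunting step (replayed over historical events) and real-time monitoring (applied to the live stream), which is why normalization up front pays off.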


👤 Adversary attribution

Adversary attribution is the methodical process of connecting observed attack patterns to specific threat actors, groups, or campaigns. The analysis integrates behavioral signatures, infrastructure reuse, malware family characteristics, and documented TTPs from authoritative threat-intel sources such as Mandiant, CrowdStrike, Recorded Future, and the MITRE ATT&CK framework.

Technical evidence

Malware code signatures, infrastructure overlap, compilation artifacts. Foundational linkages to known threat actors.

Behavioral patterns

Victim selection criteria, operational timeframes, distinctive tactical approaches. Correlation points beyond the binary.

Strategic context

Geopolitical developments, threat-actor motivations, industry targeting preferences. The “why” that anchors attribution.

Confidence levels

LOW

Isolated technical indicators with significant potential for coincidental overlap. May suggest investigative directions but requires substantial corroborating evidence before operational use.

MEDIUM

Multiple correlated indicators across diverse categories (tooling, infrastructure, TTPs) that collectively point toward a specific threat actor or group.

HIGH

Comprehensive evidence portfolio including unique identifiers, distinctive operational patterns, and multiple reinforcing data points indicating a specific actor with minimal alternative explanations.

Attribution challenges to recognize
  • False flag operations. Sophisticated adversaries deliberately impersonate other threat actors to obscure responsibility and complicate attribution.
  • Shared tooling. Widespread use of common malware families or publicly available frameworks creates attribution ambiguity.
  • Evolution of TTPs. Adversaries methodically evolve their methodologies over time to evade established recognition patterns.
  • Confirmation bias. Analytical tendency to disproportionately value evidence supporting initial attribution hypotheses while discounting contradictory indicators.

Attribution at low or medium confidence should be labeled as such in the case file. Promoting low-confidence attribution to operational decisions is the most common attribution failure mode.
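One way to keep attribution honest in the case file is to make the evidence-to-confidence mapping explicit rather than intuitive. The sketch below scores the evidence categories described above; the category names and thresholds are illustrative choices, not an industry standard:

```python
def attribution_confidence(evidence):
    """Map observed evidence categories to a confidence label.
    `evidence` is a set of category names; thresholds are illustrative."""
    categories = {"tooling", "infrastructure", "ttps", "unique_identifier"}
    hits = evidence & categories
    # HIGH requires a unique identifier plus multiple reinforcing categories.
    if "unique_identifier" in hits and len(hits) >= 3:
        return "HIGH"
    # MEDIUM requires correlated indicators across diverse categories.
    if len(hits) >= 2:
        return "MEDIUM"
    return "LOW"

print(attribution_confidence({"tooling"}))                            # LOW
print(attribution_confidence({"tooling", "infrastructure", "ttps"}))  # MEDIUM
```

A scheme like this forces the analyst to state which categories actually support the call, which directly counters the confirmation-bias failure mode listed above.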


๐Ÿ” Threat hunting with TTPs

Proactive threat hunting based on known Tactics, Techniques, and Procedures uncovers sophisticated threats that have evaded traditional detection. Unlike IOC matching, which relies on known artifacts, TTP-based hunting focuses on adversary behavior patterns regardless of the particular tools employed.

By transforming TTPs into actionable hunt hypotheses (for example, “Identify PowerShell execution originating from non-administrative accounts” or “Detect scheduled tasks created within temporary directories”), security teams can methodically query logs, EDR telemetry, or SIEM platforms for evidence of stealthy or emerging attacks.
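The first example hypothesis translates directly into a query over EDR telemetry. A minimal sketch, assuming events arrive as dictionaries with `process_name` and `user` fields (illustrative names, not a specific vendor's schema):

```python
def hunt_powershell_nonadmin(events, admin_accounts):
    """Hunt hypothesis: PowerShell execution from non-administrative accounts.
    Returns the matching events for analyst review."""
    shells = ("powershell.exe", "pwsh.exe")
    return [e for e in events
            if e.get("process_name", "").lower() in shells
            and e.get("user") not in admin_accounts]

events = [
    {"process_name": "powershell.exe", "user": "j.doe"},      # hit
    {"process_name": "powershell.exe", "user": "svc_admin"},  # expected admin use
    {"process_name": "notepad.exe", "user": "j.doe"},         # irrelevant
]
hits = hunt_powershell_nonadmin(events, admin_accounts={"svc_admin"})
print(hits)   # [{'process_name': 'powershell.exe', 'user': 'j.doe'}]
```

In practice the same logic would be expressed in the query language of the SIEM or EDR platform; the point is that the hypothesis, not the tool, defines the search.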

01

Hypothesis formation

Develop specific, testable theories based on threat intelligence about adversary TTPs relevant to your environment.

02

Query development

Create data queries or analytics that would identify evidence of the targeted behavior across relevant data sources.

03

Hunt execution

Systematically search through current and historical data for patterns matching the hypothesis.

04

Finding analysis

Investigate initial results to eliminate false positives and validate genuine security concerns.

05

Process improvement

Document findings, refine queries, and implement persistent detection for discovered techniques.


โš™๏ธ Automated threat enrichment

Automated enrichment dynamically incorporates threat intelligence into security infrastructure, delivering real-time context for suspicious events. The aim is to eliminate manual intelligence lookups and expedite incident triage by providing analysts with comprehensive, contextually enhanced views of potential threats.

Enrichment data types

  • Reputation scores. Trust ratings for IPs, domains, files, URLs.
  • Threat-actor attribution. Adversary groups associated with observed indicators.
  • Malware classification. Family, variant, and capability profiles.
  • Contextual metadata. WHOIS, infrastructure details, certificate validation.
  • Historical intelligence. Previous indicator sightings and campaign activities.

Common enrichment services

  • VirusTotal. Multi-engine file and URL reputation.
  • Shodan / Censys. Internet-exposed services and vulnerability intel.
  • GreyNoise. Background-noise filtering and benign-activity identification.
  • Microsoft Defender Threat Intelligence (MDTI, formerly RiskIQ / PassiveTotal). Domain and IP infrastructure mapping.
  • AlienVault OTX. Crowd-sourced threat indicators and community intel.

Implementation approaches

  • Native tool integration. Purpose-built connectors within SIEM/SOAR.
  • API orchestration. Custom scripts or middleware for intelligence aggregation.
  • Intelligence platforms. Dedicated TIP solutions with advanced enrichment.
  • Inline enrichment. Network tools that tag traffic with real-time threat context.
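The API-orchestration approach often reduces to a small aggregation loop: call each enrichment source, tolerate individual failures, and cache results so recurring indicators do not trigger repeated lookups. The two lookup functions below are hypothetical stand-ins; in practice they would wrap vendor APIs (VirusTotal, GreyNoise, and so on) behind your own interface:

```python
from functools import lru_cache

# Hypothetical enrichment callables standing in for real vendor API wrappers.
def reputation_lookup(ioc):
    return {"reputation": "unknown"}    # placeholder response

def whois_lookup(ioc):
    return {"registrar": "unknown"}     # placeholder response

ENRICHERS = {"reputation": reputation_lookup, "whois": whois_lookup}

@lru_cache(maxsize=4096)
def enrich(ioc):
    """Aggregate context from every enricher into one view for the analyst."""
    context = {"ioc": ioc}
    for name, fn in ENRICHERS.items():
        try:
            context[name] = fn(ioc)
        except Exception as exc:        # one failed source shouldn't block triage
            context[name] = {"error": str(exc)}
    return context

print(enrich("evil-c2.example")["reputation"])   # {'reputation': 'unknown'}
```

The per-source error handling matters more than it looks: enrichment sits on the triage path, so a timed-out vendor API should degrade the context, not stall the alert.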

๐Ÿ•ธ๏ธ Dark web and underground monitoring

Dark web surveillance can detect exposed credentials, breach preparations, targeted attack discussions, exploit development, and brand impersonation. Specialized vendors and dedicated threat-intel teams monitor these spaces using automated scraping plus human intelligence. The output is valuable as an early-warning system, especially when integrated with internal threat modeling.

Sources

  • Criminal forums. Boards for technique sharing, service sales, recruiting.
  • Marketplaces. Underground markets for malware, exploits, credentials, stolen data.
  • Leak sites. Where ransomware groups and hacktivists publish stolen data.
  • Chat platforms. Encrypted messaging for private actor communications.
  • Paste sites. Anonymous data dumps and credential leaks.

Intelligence value

  • Pre-attack indicators before traditional detection fires.
  • Visibility into adversary capabilities and intentions.
  • Advance warning of emerging attack techniques.
  • Insight into targeting priorities of threat actors.

Collection challenges

  • Access barriers to closed communities.
  • Language and cultural context requirements.
  • Signal-to-noise ratio in unstructured data.
  • Legal and ethical considerations in collection.

Use cases

  • Credential exposure monitoring and forced resets.
  • Early vulnerability patching based on exploit chatter.
  • Brand protection and impersonation detection.
  • Insider threat identification via data leakage.
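The first use case, credential exposure monitoring, is the most mechanical: intersect the monitoring feed's leaked identities with the internal directory and force resets on the matches. A minimal sketch, with illustrative field names for the feed entries:

```python
def exposed_accounts(leak_entries, internal_users):
    """Return internal accounts that appear in dark-web leak output,
    normalized to lowercase for matching."""
    internal = {u.lower() for u in internal_users}
    return sorted({e["email"].lower() for e in leak_entries
                   if e["email"].lower() in internal})

# Illustrative monitoring output and directory extract.
leaks = [{"email": "A.Smith@corp.example", "source": "paste-site"},
         {"email": "outsider@other.example", "source": "forum"}]
users = ["a.smith@corp.example", "b.jones@corp.example"]
print(exposed_accounts(leaks, users))   # ['a.smith@corp.example']
```

Real pipelines usually match on hashed credentials rather than plaintext addresses, but the intersect-and-reset shape is the same.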

Threat intelligence tiers

Intelligence operates across three tiers, each serving distinct purposes. Knowing which tier to use at which moment is part of the methodology.

Tactical
  • Time horizon: hours to days
  • Primary users: SOC analysts, IR responders
  • Format: machine-readable indicators
  • Measurement: detection effectiveness, false-positive rate
  • Update frequency: continuous, automated
  • Example sources: OSINT feeds, TIPs, vendor alerts

Operational
  • Time horizon: weeks to months
  • Primary users: threat hunters, detection engineers
  • Format: behavioral analytics, TTP documentation
  • Measurement: campaign coverage, TTP detection
  • Update frequency: regular, semi-automated
  • Example sources: campaign analysis, malware research

Strategic
  • Time horizon: months to years
  • Primary users: leadership, architects
  • Format: reports, briefings, risk assessments
  • Measurement: risk reduction, strategic alignment
  • Update frequency: periodic, manually curated
  • Example sources: industry reports, geopolitical analysis

Validation and confidence

Not all intel is equal. Before acting on a piece of intelligence, validate it across four dimensions.

📡 Source reliability

Where did this come from? Premium sources include government agencies, established vendors, vetted ISACs. Higher-risk sources include crowdsourced platforms lacking verification. Triangulation across multiple providers strengthens confidence.

โฑ๏ธ Recency

When was this observed? Indicators age out fast. Systematically retire obsolete indicators (formerly malicious infrastructure repurposed as benign cloud edges, for instance).

🎯 Specificity

Is the intel about the exact thing being investigated, or a loose analogy? “This domain is associated with phishing” is less actionable than “This specific subdomain pattern is used by this specific campaign.”

🔗 Corroboration

Does any other source say the same thing? Two independent sources reporting the same indicator is a stronger signal than one source repeating itself.

Confidence ladder

UNVERIFIED

Raw intelligence requiring validation before operational use.

LOW CONFIDENCE

Limited corroboration or from less reliable sources.

MEDIUM CONFIDENCE

Confirmed by at least one trusted source with supporting context.

HIGH CONFIDENCE

Verified by multiple trusted sources with direct evidence.
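The four validation dimensions map onto the ladder in a way that can be made explicit. The sketch below scores recency and corroboration by trusted sources; the 90-day window, the thresholds, and the trusted-source list are illustrative choices an organization would tune:

```python
from datetime import datetime, timedelta, timezone

def confidence_tier(sources, trusted_sources, last_seen, max_age_days=90):
    """Rough mapping of validation dimensions onto the confidence ladder."""
    # Recency: stale intel goes back for revalidation regardless of sourcing.
    if datetime.now(timezone.utc) - last_seen > timedelta(days=max_age_days):
        return "UNVERIFIED"
    # Corroboration: count independent trusted sources reporting the indicator.
    trusted = len(set(sources) & set(trusted_sources))
    if trusted >= 2:
        return "HIGH CONFIDENCE"
    if trusted == 1:
        return "MEDIUM CONFIDENCE"
    return "LOW CONFIDENCE" if sources else "UNVERIFIED"

recent = datetime.now(timezone.utc) - timedelta(days=3)
print(confidence_tier(["vendor_a", "isac"], ["vendor_a", "isac"], recent))
# HIGH CONFIDENCE
```

Note that the source-count check only works if the sources are genuinely independent; two feeds republishing the same upstream report still count as one, which automation alone cannot detect.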

Next up

MITRE ATT&CK

The shared vocabulary for adversary behavior. Tactics, techniques, procedures.

Read MITRE ATT&CK