How We Detected Ransomware in 8 Minutes

Real-world incident response: Detecting and containing an attack before encryption began

Reading time: 10 minutes

Executive Summary

Client: Mid-size healthcare organization (250 employees)

Attack Type: LockBit 3.0 ransomware

Initial Access: Compromised VPN credentials (no MFA)

Detection Time: 8 minutes from the attacker's first commands to a validated alert

Containment Time: 14 minutes total

Outcome: Zero files encrypted. Normal operations resumed within 2 hours.

The Attack Timeline

02:47 AM - Initial Access

The attacker authenticated to the VPN using compromised credentials belonging to an IT administrator. The credentials were likely harvested by infostealer malware on the admin's personal device; we later found them for sale on a dark web marketplace.

Red Flag #1: Login from Bulgaria. The admin lives in Ohio. Unfortunately, the VPN vendor's geo-anomaly alerting was not enabled at the time.

02:53 AM - Reconnaissance

For 6 minutes, the attacker performed reconnaissance:

  • Ran net user /domain to enumerate domain users
  • Ran net group "Domain Admins" /domain
  • Used ping to test connectivity to file servers
  • Checked for backup systems with vssadmin list shadows
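
None of these commands is malicious on its own, which is why signature-based tools ignore them. What matters is the burst: several classic recon commands from one host in a short window. Here is a minimal PowerShell sketch of that idea, assuming Windows process-creation auditing (Security event 4688 with command-line logging) is enabled; the patterns and threshold are illustrative, not our production rule:

  # Scan recent process-creation events for a burst of classic recon commands.
  $reconPatterns = 'net\s+user\s+/domain',
                   'net\s+group\s+"?Domain Admins"?',
                   'vssadmin\s+list\s+shadows'

  $since  = (Get-Date).AddMinutes(-10)
  $events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688; StartTime = $since } `
                         -ErrorAction SilentlyContinue

  $hits = $events | Where-Object {
      $msg = $_.Message
      $reconPatterns | Where-Object { $msg -match $_ }
  }

  # Two or more matching process events from one host in ten minutes is a
  # reasonable (if simplistic) trigger for analyst review.
  if (($hits | Measure-Object).Count -ge 2) {
      Write-Warning "Possible domain reconnaissance: $(($hits | Measure-Object).Count) matching process events since $since"
  }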

02:59 AM - Ransomware Staging

The attacker uploaded a ransomware binary to C:\Windows\Temp\svchost.exe, masquerading it as a legitimate Windows process. The binary was LockBit 3.0, a ransomware-as-a-service variant known for fast encryption.

This is where our detection kicked in.

How We Detected It

At 02:59 AM, our detection stack flagged three simultaneous anomalies:

Detection #1: Suspicious Process Execution

Microsoft Defender for Endpoint (the client's EDR) observed:

  • svchost.exe running from C:\Windows\Temp (legitimate svchost always runs from C:\Windows\System32)
  • Process had no digital signature
  • Process parent was cmd.exe launched via RDP session
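
An EDR applies these checks continuously from kernel telemetry, but the logic itself is simple. Here is a point-in-time PowerShell sketch of the same three checks; the process ID is a placeholder:

  # Illustrative re-creation of the EDR's checks for one process of interest.
  $targetPid = 4242   # hypothetical process ID under investigation

  $proc   = Get-CimInstance Win32_Process -Filter "ProcessId = $targetPid"
  $parent = Get-CimInstance Win32_Process -Filter "ProcessId = $($proc.ParentProcessId)"
  $sig    = Get-AuthenticodeSignature -FilePath $proc.ExecutablePath

  $alerts = @()
  if ($proc.Name -eq 'svchost.exe' -and
      $proc.ExecutablePath -notlike 'C:\Windows\System32\*') {
      $alerts += "svchost.exe running outside System32: $($proc.ExecutablePath)"
  }
  if ($sig.Status -ne 'Valid') {
      $alerts += "Binary is not validly signed (status: $($sig.Status))"
  }
  if ($parent.Name -eq 'cmd.exe') {
      $alerts += "Suspicious parent process: $($parent.Name)"
  }
  $alerts | ForEach-Object { Write-Warning $_ }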

Detection #2: Behavioral Indicators

Within seconds of execution, our SIEM correlated multiple behavioral indicators:

  • Attempt to delete Volume Shadow Copies (vssadmin delete shadows /all)
  • Attempt to disable Windows Defender (Set-MpPreference -DisableRealtimeMonitoring $true)
  • Rapid file access pattern (100+ files accessed in 10 seconds)

These behaviors matched known MITRE ATT&CK techniques (a toy version of the correlation rule follows the list):

  • T1490: Inhibit System Recovery (shadow copy deletion)
  • T1562.001: Impair Defenses (disable AV)
  • T1486: Data Encrypted for Impact (preparation phase)
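
Here is that toy correlation rule: raise one high-severity alert when a single host exhibits two or more of these techniques inside a short window. The event objects and thresholds are invented for illustration; a real SIEM rule would be written in the platform's own query language:

  $window = New-TimeSpan -Seconds 60

  # Stand-in for normalized SIEM events tagged with ATT&CK technique IDs
  $events = @(
      [pscustomobject]@{ Host = 'WS01'; Time = (Get-Date);               Technique = 'T1490' },
      [pscustomobject]@{ Host = 'WS01'; Time = (Get-Date).AddSeconds(5); Technique = 'T1562.001' }
  )

  $events | Group-Object Host | ForEach-Object {
      $sorted     = $_.Group | Sort-Object Time
      $span       = $sorted[-1].Time - $sorted[0].Time
      $techniques = $sorted.Technique | Sort-Object -Unique
      if ($techniques.Count -ge 2 -and $span -le $window) {
          Write-Warning "Host $($_.Name): $($techniques -join ', ') within $($span.TotalSeconds)s; likely ransomware pre-encryption activity"
      }
  }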

Detection #3: Threat Intelligence Match

The binary's SHA256 hash matched a known LockBit 3.0 sample in our threat intelligence feeds. This gave us immediate context: we knew exactly what we were dealing with.
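
The lookup itself is the simplest part of the pipeline. A minimal PowerShell equivalent, with the feed file path as a placeholder:

  # Compute SHA-256 of a suspect file and compare against an exported IOC list.
  $iocHashes = Get-Content 'C:\ThreatIntel\lockbit_sha256.txt'   # hypothetical feed export
  $suspect   = 'C:\Windows\Temp\svchost.exe'

  $hash = (Get-FileHash -Algorithm SHA256 -Path $suspect).Hash
  if ($iocHashes -contains $hash) {
      Write-Warning "Known-bad hash: $suspect matches threat intel ($hash)"
  }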

⏱️ Timeline So Far

02:59:00 AM: Ransomware execution
02:59:08 AM: Alert triggered (8 seconds)
02:59:15 AM: Analyst notified (15 seconds)

Our Response

At 02:59 AM, 12 minutes after the initial VPN login, our SOC analyst received a high-priority alert. Here's what happened next:

Minute 1: Triage & Validation

03:00 AM - Analyst reviewed alert, confirmed true positive based on:

  • Behavioral indicators (shadow copy deletion)
  • Threat intel match (known ransomware hash)
  • Abnormal login context (Bulgaria)

Escalated to P1 incident, initiated ransomware playbook.

Minutes 2-5: Immediate Containment

03:01 AM - Analyst executed containment via Defender for Endpoint:

  • Network-isolated the affected machine (preventing lateral movement)
  • Killed ransomware process via EDR remote response
  • Disabled compromised VPN account in Azure AD
  • Terminated all active sessions for that account
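
The analyst drove these actions from the Defender console, but they are all scriptable. Here is a sketch of the underlying REST calls; $token, $machineId, and $userId are placeholders, Defender and Graph require separately scoped tokens in practice, and the URLs should be verified against current Microsoft documentation:

  $headers = @{ Authorization = "Bearer $token" }

  # 1. Network-isolate the endpoint (Defender for Endpoint machine action)
  Invoke-RestMethod -Method Post `
      -Uri "https://api.securitycenter.microsoft.com/api/machines/$machineId/isolate" `
      -Headers $headers -ContentType 'application/json' `
      -Body (@{ Comment = 'Ransomware containment'; IsolationType = 'Full' } | ConvertTo-Json)

  # 2. Disable the compromised account (Microsoft Graph)
  Invoke-RestMethod -Method Patch `
      -Uri "https://graph.microsoft.com/v1.0/users/$userId" `
      -Headers $headers -ContentType 'application/json' `
      -Body (@{ accountEnabled = $false } | ConvertTo-Json)

  # 3. Revoke all active sessions and refresh tokens for that account
  Invoke-RestMethod -Method Post `
      -Uri "https://graph.microsoft.com/v1.0/users/$userId/revokeSignInSessions" `
      -Headers $headers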

03:03 AM - Phoned the client's on-call IT contact (as documented in the runbook). Confirmed no business impact yet; the attack occurred during off-hours.

03:05 AM - Blocked the attacker's source IP at the firewall (a Bulgarian hosting provider).

Minutes 6-15: Scope Assessment

03:06 AM - Queried SIEM and EDR to determine scope:

  • How many systems were accessed? Only one (the admin's workstation, via RDP).
  • Were files encrypted? No. Ransomware was in preparation phase (deleting shadows) when we killed it.
  • Was data exfiltrated? Firewall logs showed no large outbound transfers (see the sketch after this list). LockBit affiliates typically exfiltrate data before encrypting, but this attack was cut short before that stage.
  • Were backups affected? No. Backups are air-gapped and ransomware never reached backup systems.
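
The exfiltration check boils down to summing outbound bytes per external destination over the attack window. A rough PowerShell version, with hypothetical column names (Timestamp, DstIP, BytesOut) that would need to match the firewall's actual export format:

  $logs  = Import-Csv 'C:\Logs\fw_export.csv'   # placeholder export path
  $start = Get-Date '02:45 AM'
  $end   = Get-Date '03:15 AM'

  $logs |
      Where-Object { [datetime]$_.Timestamp -ge $start -and [datetime]$_.Timestamp -le $end } |
      Group-Object DstIP |
      ForEach-Object {
          $bytes = ($_.Group | Measure-Object -Property BytesOut -Sum).Sum
          if ($bytes -gt 500MB) {   # illustrative threshold; anything this large off-hours deserves a look
              Write-Warning "$($_.Name): $([math]::Round($bytes / 1GB, 2)) GB outbound"
          }
      }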

03:13 AM - Validated eradication: No ransomware artifacts remained. Searched all endpoints for LockBit IOCs—found nothing.
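
In practice we ran that sweep through the EDR's hunting interface, but the same idea can be expressed as a quick PowerShell sweep over WinRM; the endpoint list, staging directory, and hash are placeholders:

  # Quick-and-dirty fleet sweep for the LockBit binary by hash. Assumes WinRM
  # is enabled on the targets; a real sweep would use EDR advanced hunting.
  $iocHash   = '<sha256-from-threat-intel>'          # placeholder
  $computers = Get-Content 'C:\IR\endpoints.txt'     # placeholder endpoint list

  Invoke-Command -ComputerName $computers -ScriptBlock {
      param($hash)
      Get-ChildItem 'C:\Windows\Temp' -File -ErrorAction SilentlyContinue |
          Where-Object { (Get-FileHash $_.FullName -Algorithm SHA256).Hash -eq $hash } |
          ForEach-Object { "$env:COMPUTERNAME : $($_.FullName)" }
  } -ArgumentList $iocHash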

✅ Containment Complete

Total Time: 14 minutes from initial execution to full containment.
Files Encrypted: Zero.
Systems Impacted: One workstation (isolated, reimaged).

Root Cause Analysis

Post-incident investigation revealed:

How Credentials Were Compromised

  • IT admin used same password for VPN and personal email
  • Personal email was compromised in data breach (found on haveibeenpwned.com)
  • Credentials sold on dark web marketplace for $15
  • VPN had no MFA requirement for admin accounts

Why Detection Worked

  • Behavioral analytics: Caught shadow copy deletion immediately
  • Threat intelligence: Known ransomware hash matched
  • EDR telemetry: Process execution details flagged anomaly
  • Human analyst: Validated alert in under 1 minute, executed playbook

Why Traditional Defenses Failed

  • Antivirus: Attacker disabled it via PowerShell (no prevention)
  • Firewall: Legitimate VPN connection appeared normal
  • Email security: Not applicable (credential theft, not phishing)

Remediation & Hardening

After containment, we worked with the client to prevent recurrence:

Immediate Actions (Week 1)

  • Enforced MFA on all VPN accounts (no exceptions)
  • Rotated all privileged credentials (domain admins, service accounts)
  • Reimaged affected workstation (forensic image preserved)
  • Enabled geo-blocking on VPN (whitelist approved countries)
  • Implemented impossible travel alerts in Azure AD
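
Impossible-travel detection is a built-in Azure AD Identity Protection feature, but the underlying idea is worth seeing: compute the speed implied by two consecutive sign-ins from different locations. A self-contained PowerShell illustration using this incident's rough geography:

  function Get-DistanceKm($lat1, $lon1, $lat2, $lon2) {
      # Haversine great-circle distance in kilometers
      $toRad = [math]::PI / 180
      $dLat = ($lat2 - $lat1) * $toRad
      $dLon = ($lon2 - $lon1) * $toRad
      $a = [math]::Sin($dLat/2) * [math]::Sin($dLat/2) +
           [math]::Cos($lat1 * $toRad) * [math]::Cos($lat2 * $toRad) *
           [math]::Sin($dLon/2) * [math]::Sin($dLon/2)
      6371 * 2 * [math]::Atan2([math]::Sqrt($a), [math]::Sqrt(1 - $a))
  }

  # Columbus, OH to Sofia, Bulgaria, 30 minutes between sign-ins
  $km    = Get-DistanceKm 39.96 (-82.99) 42.70 23.32
  $hours = 0.5
  if (($km / $hours) -gt 900) {   # faster than a commercial flight: impossible
      Write-Warning ("Impossible travel: {0:N0} km in {1} h" -f $km, $hours)
  }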

Long-Term Hardening (Month 1)

  • Deployed privileged access workstations (PAWs) for admin tasks
  • Implemented Just-In-Time admin access (JIT, Azure AD PIM)
  • Disabled RDP for all non-admin users
  • Deployed Microsoft Defender Attack Surface Reduction (ASR) rules (see the sketch after this list):
    • Block credential stealing from LSASS
    • Block process creations originating from PsExec and WMI commands
    • Block Office applications from creating executable content
  • Implemented application whitelisting on critical servers
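
For reference, those three ASR rules can be enabled with a single PowerShell command. The GUIDs below are the Microsoft-published rule IDs as we know them; verify them against the current ASR rule reference before deploying, and consider an audit-mode rollout first:

  $ruleIds = @(
      '9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2',  # Block credential stealing from LSASS
      'd1e49aac-8f56-4280-b9ba-993a6d77406c',  # Block process creations from PsExec and WMI
      '3b576869-a4ec-4529-8536-b80a7769e899'   # Block Office apps creating executable content
  )
  # Use AuditMode instead of Enabled to trial the rules before enforcing them
  Set-MpPreference -AttackSurfaceReductionRules_Ids $ruleIds `
                   -AttackSurfaceReductionRules_Actions Enabled, Enabled, Enabled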

Lessons Learned

What Worked

  • Behavioral detection caught ransomware before signatures existed
  • EDR deployment on all endpoints enabled rapid response
  • Documented playbook ensured consistent, fast response
  • 24/7 SOC coverage meant 2:59 AM alert was triaged immediately
  • Isolation capability prevented lateral movement

What Could Have Been Better

  • ⚠️ MFA not enforced on VPN (the client's responsibility, but we should have flagged it earlier)
  • ⚠️ No geo-blocking on VPN (should have been configured)
  • ⚠️ Admin used personal email for corporate credentials (training gap)
  • ⚠️ VPN vendor alerts disabled (would have caught Bulgaria login)

By The Numbers

  • Detection Time: 8 min
  • Total Containment: 14 min
  • Files Encrypted: 0
  • Back to Normal: 2 hrs

The Alternative Timeline

What if detection had taken 30 minutes instead of 8?

LockBit 3.0 can encrypt 100,000 files in 4-5 minutes on a modern system. In 30 minutes, the attacker could have:

  • Encrypted the admin's workstation (5 min)
  • Moved laterally to 3-5 additional systems via RDP (10 min)
  • Reached file servers and begun mass encryption (15 min)
  • Potentially encrypted 50-100GB of critical data

Estimated impact:

  • 💰 Ransom demand: $250,000 (typical for healthcare orgs of this size)
  • ⏱️ Downtime: 3-7 days (restore from backups)
  • 📋 Regulatory: HIPAA breach notification (potentially PHI encrypted)
  • 💵 Total cost: $500K-1M (ransom, downtime, remediation, legal, PR)

Actual impact with 8-minute detection:

  • 💰 Ransom paid: $0
  • ⏱️ Downtime: 2 hours (one workstation reimaged)
  • 📋 Regulatory: No breach notification required
  • 💵 Total cost: ~$5K (analyst time + workstation reimage)

⚡ Speed Saved $500K+

The difference between 8 minutes and 30 minutes was the difference between a minor incident and a catastrophic breach. That's why MTTD (Mean Time to Detect) matters.

Conclusion

This incident demonstrates why MDR exists. Traditional security tools (firewall, antivirus, VPN) all failed to stop the attack. What stopped it was:

  1. Continuous monitoring of endpoint behavior
  2. Behavioral analytics that caught anomalies
  3. Threat intelligence for instant context
  4. Human expertise to validate and respond
  5. Documented playbooks for consistent execution
  6. 24/7 coverage so 2:59 AM alerts don't wait until 8 AM

The client avoided a $500K+ ransomware incident because we detected and responded in 8 minutes instead of hours or days. That's the value of MDR.

Want this level of protection?

Our MDR service provides 24/7 monitoring, behavioral detection, and rapid response—just like this case study.

Book a Free Consultation