Hypervisor‑Level Qilin Ransomware Containment for a Multinational Manufacturer (August–October 2025)
A large, multi‑site manufacturing group in Asia woke up to a nightmare: key VMware ESXi hypervisors and the virtual machines they hosted were encrypted by a Qilin (Agenda) ransomware affiliate. Multiple business‑critical systems went down at once.
The organisation engaged Blackpanda’s digital forensics and incident response (DFIR) team (internal link: Incident Response / DFIR service page) to contain the incident, investigate what really happened, and give executives a clear, defensible understanding of their true data‑theft and business‑impact risk.
At a glance
- Industry: Manufacturing & distribution (multi‑site, global operations)
- Environment: VMware ESXi virtualisation, centralised backups, enterprise EDR, remote‑management tools
- Threat actor: Qilin (Agenda) ransomware affiliate
- Attack type: Double‑extortion ransomware, hypervisor‑level encryption, credential & backup targeting
- Blackpanda services: DFIR, threat‑intel analysis, containment support, recovery advisory, security roadmap
CHALLENGE
A hypervisor‑level ransomware attack with limited visibility
The ransomware operators didn’t just hit individual servers. They:
- Compromised VMware ESXi hypervisors and deployed ransomware at the virtualisation layer, rendering many virtual machines unbootable in a single move.
- Knocked out multiple business‑critical applications at once, creating a multi‑system outage across regions.
- Cut off EDR visibility for many systems when those VMs stopped reporting telemetry after encryption.
The business needed to bring systems back online quickly, but much of the usual forensic evidence was now encrypted, offline or missing.
From a commercial perspective this was a multi‑site outage; from a forensic perspective, it was like having half the cameras go dark right before the robbery.
Advanced tradecraft using familiar tools
The affiliate relied heavily on “living‑off‑the‑land” techniques that blend into normal IT activity:
- Remote administration tools such as PsExec and WMIC, used with highly privileged accounts to execute commands across file servers, database servers and hypervisors.
- Silent installation of remote‑management software (ScreenConnect, commercial RMM agents) for persistence and hands‑on control.
- Obfuscated PowerShell targeting backup platforms to discover, decrypt and reuse credentials stored in backup databases.
- A suspected Bring‑Your‑Own‑Vulnerable‑Driver (BYOVD) approach to tamper with endpoint security at the kernel level and disrupt protections.
Internally, much of this initially looked like routine administration. By the time systems started encrypting, attackers already had deep, privileged access.
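One way defenders can surface this kind of activity earlier is to hunt for remote‑admin tooling and encoded PowerShell launched by high‑privilege accounts. The sketch below illustrates the idea against an exported process‑creation log; the column names, file name and account list are assumptions for illustration, not a description of the client's actual tooling or Blackpanda's internal detections.

```python
# lolbin_hunt.py - minimal sketch: flag remote-admin tooling and encoded PowerShell
# launched by privileged accounts, from an exported process-creation log.
# Column names (timestamp, host, user, process, command_line) are assumptions about
# what an EDR or Sysmon export might contain; adapt them to your own schema.
import csv
from pathlib import Path

SUSPECT_PROCESSES = {"psexec.exe", "psexesvc.exe", "wmic.exe"}    # classic lateral-movement tooling
PRIVILEGED_ACCOUNTS = {"CORP\\svc-backup", "CORP\\domain-admin"}  # example high-privilege accounts

def is_suspicious(row: dict) -> bool:
    proc = row["process"].rsplit("\\", 1)[-1].lower()
    cmd = row["command_line"].lower()
    # "-enc" also matches "-EncodedCommand"; this will over-match and needs tuning per environment.
    encoded_ps = proc == "powershell.exe" and "-enc" in cmd
    return (proc in SUSPECT_PROCESSES or encoded_ps) and row["user"] in PRIVILEGED_ACCOUNTS

def hunt(csv_path: str) -> list[dict]:
    with Path(csv_path).open(newline="", encoding="utf-8") as fh:
        return [row for row in csv.DictReader(fh) if is_suspicious(row)]

if __name__ == "__main__":
    for event in hunt("process_creation_export.csv"):
        print(f'{event["timestamp"]} {event["host"]} {event["user"]}: {event["command_line"]}')
```

A query like this will inevitably match some legitimate administration; the value is in baselining which accounts, hosts and hours are actually normal, so that "routine" tool use on a hypervisor or backup server at 2am stands out.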
Uncertainty around data theft and double extortion
As with most modern ransomware groups, Qilin:
- Operates a dark‑web leak site to pressure victims.
- Is known for double‑extortion: exfiltrate data first, encrypt later, then threaten to leak.
In this case:
- Telemetry showed rclone – a legitimate file‑synchronisation tool frequently abused for data exfiltration – running on key file servers.
- Limited firewall log retention meant that network visibility prior to the day of discovery was incomplete, so it was hard to prove how much data, if any, had actually left the environment.
- On Qilin’s leak site, the client was listed with only a handful of screenshots and no large data set or download links, unlike other victims whose stolen data volumes are prominently advertised.
The leadership team urgently needed answers:
- How long had the attacker been inside?
- Which systems and accounts were affected?
- Was sensitive data actually exfiltrated or only targeted?
- How could they safely restore operations without re‑introducing the attacker?
SOLUTION
1. Rapid, business‑aware DFIR engagement
Blackpanda was engaged within hours of discovery and immediately aligned the investigation with the client’s business‑recovery priorities:
- Leveraging existing telemetry. We pulled and preserved evidence from the client’s EDR platform, key servers and available network devices before logs could roll over.
- Targeted system restores. Instead of restoring every encrypted machine for forensics (which would have delayed recovery), only high‑value, high‑signal endpoints were restored from pre‑encryption backups for deep analysis.
- Clear, shared timeline. We built a concise, visual timeline of the incident – from first malicious activity through to encryption and containment – that executives, IT and security could all use as a single reference.
This “evidence‑first, business‑aware” approach allowed forensics and recovery to run in parallel, rather than forcing the business to choose between operational uptime and investigative clarity.
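At its core, that single reference timeline is a merge‑and‑sort over normalised events from every surviving evidence source. The sketch below shows the idea in simplified form; the source names, fields and sample events are hypothetical, and real engagements also have to handle time zones, clock skew and duplicate records.

```python
# timeline_merge.py - minimal sketch: merge multi-source DFIR events into one chronological view.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    when: datetime   # normalised to UTC before merging
    source: str      # e.g. "EDR", "firewall", "ESXi shell history"
    host: str
    summary: str

def build_timeline(*sources: list[Event]) -> list[Event]:
    """Flatten all evidence sources and sort chronologically."""
    merged = [event for source in sources for event in source]
    return sorted(merged, key=lambda e: e.when)

# Hypothetical events from two sources, for illustration only.
edr = [Event(datetime(2025, 8, 1, 2, 14, tzinfo=timezone.utc), "EDR", "FS01",
             "psexec.exe launched by admin account")]
fw = [Event(datetime(2025, 8, 1, 2, 20, tzinfo=timezone.utc), "firewall", "FS01",
            "outbound connection to unfamiliar IP")]

for e in build_timeline(edr, fw):
    print(f"{e.when:%Y-%m-%d %H:%M} [{e.source}] {e.host}: {e.summary}")
```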
2. Reconstructing the attack using MITRE ATT&CK®
To make the story legible for both technical and non‑technical stakeholders, Blackpanda mapped attacker behaviour against the MITRE ATT&CK framework, covering Initial Access, Execution, Persistence, Defense Evasion, Credential Access, Discovery, Lateral Movement, Command & Control, Exfiltration and Impact.
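In practice, that mapping is simply a structured link between each observation and an ATT&CK tactic and technique. The sketch below illustrates the shape of it; the technique IDs shown are plausible mappings for the behaviours described in this case, not the exact mapping from the engagement report.

```python
# attack_mapping.py - minimal sketch: tie observed behaviour to MITRE ATT&CK tactics/techniques.
# The observations and technique IDs below are illustrative, not the engagement's full mapping.
from dataclasses import dataclass

@dataclass
class Finding:
    observation: str
    tactic: str
    technique_id: str
    technique_name: str

FINDINGS = [
    Finding("Obfuscated PowerShell against the backup platform", "Execution",
            "T1059.001", "Command and Scripting Interpreter: PowerShell"),
    Finding("Silent ScreenConnect / RMM agent installs", "Persistence",
            "T1219", "Remote Access Software"),
    Finding("rclone transfers from file servers", "Exfiltration",
            "T1567.002", "Exfiltration Over Web Service: Exfiltration to Cloud Storage"),
    Finding("ESXi-level encryption of virtual machines", "Impact",
            "T1486", "Data Encrypted for Impact"),
]

for f in FINDINGS:
    print(f"[{f.tactic}] {f.technique_id} {f.technique_name}: {f.observation}")
```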
Key findings included:
- Earliest confirmed malicious activity: abuse of remote‑management tools roughly four days before encryption.
- Privilege escalation & lateral movement using compromised admin accounts to:
  - Install remote‑access agents on hypervisors and backup servers.
  - Run network scanners to map the environment.
  - Push payloads and scripts from central points such as backup servers.
- Persistence via remote tools such as ScreenConnect and an RMM agent, silently deployed across several strategic systems.
- Credential harvesting from backup infrastructure using adapted versions of public Veeam password‑recovery scripts and registry lookups (e.g. salts, database configuration), designed to decrypt stored credentials and weaken recovery options.
- Use of rclone and SOCKS proxy DLLs, suggesting data staging and outbound transfers, even though incomplete logs prevented an exact volume calculation.
3. Clarifying data‑exfiltration risk
Data theft was the board’s biggest concern. Blackpanda combined:
- Host‑level evidence of rclone execution on two key file servers.
- Network evidence for the day of discovery showing connections from those servers to external infrastructure consistent with exfiltration tooling.
- Dark‑web monitoring of Qilin’s leak site, confirming only a small set of screenshots – with no advertised data volume, file count or download pack, unlike other Qilin victims.
From this, we delivered a nuanced, defensible conclusion: there were clear, technically sophisticated attempts to exfiltrate data, but no evidence of large‑scale successful export or a public leak comparable to other Qilin cases.
This gave executives a much clearer starting point for regulatory notifications, customer communications and insurer discussions, without swinging to either extreme of “nothing happened” or “assume the worst”.
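To make the triangulation concrete, the sketch below shows the core correlation step in miniature: take the windows in which rclone was seen running on a host and check which outbound flows from that host fall inside them. The event structures, field names, slack window and sample values are assumptions for illustration; real analysis also has to cope with clock skew, NAT and partial logs.

```python
# exfil_correlation.py - minimal sketch: correlate rclone execution windows with outbound flows.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ProcessRun:
    host: str
    start: datetime
    end: datetime

@dataclass
class Flow:
    host: str
    when: datetime
    dest_ip: str
    bytes_out: int

def correlate(runs: list[ProcessRun], flows: list[Flow], slack: timedelta = timedelta(minutes=5)):
    """Return (run, flow) pairs where a flow left a host while rclone was, or had just been, running."""
    hits = []
    for flow in flows:
        for run in runs:
            if flow.host == run.host and run.start - slack <= flow.when <= run.end + slack:
                hits.append((run, flow))
    return hits

# Hypothetical inputs for illustration only.
runs = [ProcessRun("FS01", datetime(2025, 8, 2, 1, 0, tzinfo=timezone.utc),
                   datetime(2025, 8, 2, 3, 30, tzinfo=timezone.utc))]
flows = [Flow("FS01", datetime(2025, 8, 2, 2, 15, tzinfo=timezone.utc), "203.0.113.50", 4_200_000_000)]

for run, flow in correlate(runs, flows):
    print(f"{flow.when:%Y-%m-%d %H:%M} {flow.host} -> {flow.dest_ip} "
          f"({flow.bytes_out / 1e9:.1f} GB) during rclone window")
```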
4. Board‑ready reporting and a security roadmap
At the end of the engagement, the client received:
- A single, plain‑language narrative of the incident, built from primary evidence and ATT&CK‑mapped analysis – suitable for executive briefings, regulator queries and insurance claims.
- A factual account of which security controls worked (e.g. EDR quarantining at least one Linux ransomware payload and a malicious loader DLL) and where attacker techniques such as BYOVD and hypervisor‑level targeting bypassed traditional defences.
- A prioritised hardening roadmap focused on the exact gaps exploited by the attacker:
  - Limited monitoring and logging at the ESXi hypervisor layer.
  - Short firewall and VPN log‑retention windows.
  - Concentration of power in a small number of privileged accounts.
  - Ungoverned or lightly governed remote‑management tools.
  - Insufficient protection and segregation of backup infrastructure and credentials.
This turned a one‑off crisis into a concrete plan to reduce the likelihood and impact of the next incident.
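As one concrete starting point on the hypervisor‑logging gap above, the sketch below uses the open‑source pyVmomi library to check whether each ESXi host forwards its logs to a remote syslog target. It is a hedged example only: the vCenter address, account and certificate handling are placeholders, and your environment may need a different authentication and TLS setup.

```python
# esxi_syslog_check.py - minimal sketch: verify remote syslog forwarding on ESXi hosts via pyVmomi.
# Connection details are placeholders; use proper certificate validation and a read-only account.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.internal", user="readonly-auditor",
                  pwd="CHANGE_ME", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Syslog.global.logHost holds the remote syslog target(s); empty means logs stay local
        # and can be destroyed along with the hypervisor during an attack.
        options = host.configManager.advancedOption.QueryOptions("Syslog.global.logHost")
        target = options[0].value if options else None
        status = "OK" if target else "NO REMOTE SYSLOG"
        print(f"{host.name}: {status} ({target!r})")
finally:
    Disconnect(si)
```

Shipping hypervisor logs off the host is a small change, but it is exactly the evidence that was missing when the ESXi layer was encrypted in this case.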
RESULTS
1. Clear, shared understanding of “what happened”
Instead of conflicting internal stories, the client gained:
- A chronological reconstruction from the first malicious use of remote‑management tools through to hypervisor encryption and discovery.
- Alignment between IT, security, legal and business leadership on the key facts of the incident.
- A credible, independent narrative that could be shared – under NDA – with regulators, customers, partners and insurers.
This reduced internal friction, cut time spent debating “what actually happened”, and let leadership focus on decisions.
2. Evidence‑based view of data‑theft exposure
The organisation moved from speculation to a balanced, evidence‑based stance:
- Yes, there were high‑confidence attempts at exfiltration (rclone, staging, outbound connections).
- No, there was no evidence of a major data pack being successfully published or even advertised on Qilin’s leak site.
This clarity:
- Informed regulatory and customer notifications.
- Helped shape PR and communications around realistic, not hypothetical, impacts.
- Reduced the risk of over‑ or under‑reacting to the incident.
3. Faster, safer recovery of virtualised services
Because DFIR activities were designed to support rather than slow recovery:
- Critical workloads were rebuilt from known‑good baselines, rather than rushed mass restores of everything.
- Backdoors – including unauthorised remote‑access tools and malicious or compromised accounts – were identified during the investigation and removed as part of the rebuild.
- The business resumed operations more quickly and with greater confidence that attacker persistence mechanisms had been eradicated.
4. Real‑world validation of security investments
The engagement showed the client exactly where their security stack delivered value and where it was blind:
- EDR successfully blocked some Qilin payloads and a malicious loader DLL – evidence that security investments were not wasted.
- At the same time, BYOVD methods, credential theft against backups and hypervisor‑level targeting exposed blind spots that traditional endpoint‑centric security cannot easily cover.
This turned the incident into a data‑driven input for future budgeting and architecture, rather than a vague justification for “spend more”.
5. Actionable hardening priorities for similar environments
From this case, the client walked away with a focused set of priorities:
- Extend monitoring and hardening to ESXi and other hypervisors, not just guest operating systems.
- Increase log retention and centralisation on firewalls and VPNs.
- Harden and monitor backup platforms and credentials, including credential encryption material (such as salts), service accounts and management consoles.
- Rationalise and govern remote‑management and remote‑access tools so every path is authorised, monitored and revocable.
- Reduce reliance on a handful of powerful admin accounts via segregation of duties and least‑privilege.
These recommendations were specific to how this attacker operated, not a generic checklist.
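For the privileged‑account priority in particular, a practical first step is a recurring report of who actually sits in the most powerful directory groups. The sketch below shows one way to pull that from Active Directory with the open‑source ldap3 library; the server name, base DN, credentials and group names are placeholders for illustration.

```python
# admin_group_audit.py - minimal sketch: list members of high-privilege AD groups via ldap3.
# Server, credentials, base DN and group names are placeholders; use a read-only audit account.
from ldap3 import Server, Connection, ALL

SERVER = "dc01.example.internal"
BASE_DN = "dc=example,dc=internal"
GROUPS = ["Domain Admins", "Enterprise Admins", "Backup Operators"]

server = Server(SERVER, get_info=ALL)
conn = Connection(server, user="EXAMPLE\\audit-ro", password="CHANGE_ME", auto_bind=True)

for group in GROUPS:
    conn.search(BASE_DN, f"(&(objectClass=group)(cn={group}))", attributes=["member"])
    entry = conn.entries[0] if conn.entries else None
    members = entry.entry_attributes_as_dict.get("member", []) if entry else []
    print(f"{group}: {len(members)} member(s)")
    for dn in members:
        print(f"  {dn}")

conn.unbind()
```

Reviewing a report like this regularly, and removing accounts that no longer need membership, makes it far harder for a single compromised credential to reach hypervisors, backups and file servers at once.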
FAQ: Hypervisor‑Level Ransomware & DFIR
Q1. What is a hypervisor‑level ransomware attack and why is it so disruptive?
A hypervisor‑level ransomware attack targets the virtualisation layer (e.g. VMware ESXi) rather than just individual guest servers. When hypervisors are encrypted or disabled, many virtual machines go down at once. That creates a multi‑system outage and can also cut off visibility from EDR agents that sit on those VMs, making both recovery and investigation harder.
Q2. Can DFIR still help if the hypervisors and many logs are encrypted or missing?
Yes. In this case, we combined EDR telemetry, surviving system artefacts, network device logs and carefully chosen backup restores to reconstruct the attacker’s activity. Even with gaps (e.g. missing firewall logs for earlier days), it was still possible to build a defensible timeline, attribute activity to Qilin tradecraft, and assess data‑theft risk. The key is rapid evidence preservation and a DFIR team that understands how to work around incomplete data.
Q3. How did Blackpanda assess whether data was exfiltrated?
We correlated:
- Host‑level evidence of rclone execution and supporting scripts.
- Network connections from those hosts to external infrastructure.
- Threat‑intel and dark‑web monitoring of Qilin’s leak site.
This triangulation showed clear exfiltration attempts but no sign of a large‑scale published data pack, leading to a measured conclusion on data‑theft risk rather than pure guesswork.
Q4. What security improvements came out of this engagement?
The client used the DFIR findings to prioritise:
- Monitoring and logging for hypervisors and backups.
- Longer retention for firewall and VPN logs.
- Stronger governance around remote‑access tools and privileged accounts.
- Specific tuning of EDR and backup configurations based on observed attacker techniques.
These changes directly address how the attacker moved and where controls were bypassed, making them much more impactful than generic recommendations.
Q5. How can organisations prepare for a similar ESXi ransomware incident?
Key steps include:
- Ensuring EDR or equivalent telemetry is widely deployed and centrally monitored.
- Implementing robust, segregated backup and recovery with tested restore procedures.
- Enforcing least‑privilege access and strong credential hygiene, especially for admin and backup accounts.
- Governing remote‑access tools and regularly reviewing which systems they touch.
- Running tabletop exercises and readiness assessments focused on virtualisation and backup compromise scenarios.
Next step: turn this into your playbook
This anonymised case study is based on a real 2025 Blackpanda engagement in which a Qilin affiliate targeted VMware ESXi hypervisors and backup infrastructure at a large multi‑site manufacturer.
If your organisation relies heavily on virtualisation and centralised backups, the same attack patterns apply. Blackpanda can help you:
- Prepare before an incident with readiness reviews and playbooks.
- Respond during an incident with 24/7 DFIR support.
- Strengthen your environment afterwards with a targeted hardening roadmap.
Speak to us today.






