Why Breach and Attack Simulation Is the Future of Security Validation
- Markus Vervier
Yesterday's news told the same story we hear every week: another supply chain attack, another NPM compromise, another reminder that traditional security approaches aren't keeping pace with reality. This time it was the Shai Hulud worm infecting thousands of GitHub projects. But here's the thing: this exact attack happened 69 days ago. And four years ago. Same technique, same exploitation path, just more sophisticated execution.
The security industry loves to panic after each breach, sending out alerts to "rotate all secrets immediately" and "monitor your environment." But how exactly do you monitor for something when you don't know if your controls would even detect it?

The OWASP Wake-Up Call Nobody's Talking About
Let's be direct: most organizations today are not getting breached through sophisticated zero-day exploits. They're getting compromised through misconfigurations, overly broad permissions, and basic security controls that nobody validated actually work.
Even in AppSec, the OWASP Top 10 2025 validates this reality. Security Misconfiguration jumped to #2. Supply Chain Failures expanded beyond vulnerable components to cover comprehensive process failures. The message is clear: we're not losing the war because of bad code; we're losing because we can't validate that our increasingly complex infrastructure works as intended.
We've seen this repeatedly in the field. Solid applications with parameterized queries, proper input validation, comprehensive authentication checks, all sitting behind a publicly accessible S3 bucket. Perfect code, catastrophic configuration.
Why Traditional Security Validation Falls Short
Here's what most organizations do for security validation:
Annual penetration tests: Pay for two weeks of expert time, get a report, fix some issues, then watch your infrastructure change 500 times before the next test, starting the day after the pentest concludes.
Vulnerability scanning: Drowning in thousands of alerts, most of them false positives, none telling you whether your actual defenses would stop a real attack. You know you have vulnerabilities. But would your EDR actually prevent their exploitation? Even when they're chained together and executed rapidly?
Compliance checklists: Proving you have policies, not that the controls they mandate actually work. Perfect for paper compliance, useless against attackers. And in 2025, liability doesn't stop at paper when a breach actually happens.
The Control Validation Gap
The real question isn't "what vulnerabilities exist?" It's "do my controls work?"
When LockBit launches a new wave of attacks using stolen credentials for lateral movement, you need to know immediately: Will my MFA configuration render the stolen credentials useless? Will my network segmentation stop the lateral movement? Will my EDR detect and block the execution? Is my DFIR process effective enough to reconstruct what actually happened?
A vulnerability scan can't answer these questions. Neither can a checklist. You need to actually test the attack chain against your actual defenses.
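To make the idea of "testing the attack chain" concrete, here is a minimal illustrative sketch in Python, not any vendor's real implementation. The assumption is that each step in a chain is a safe technique emulation that reports whether the control under test blocked it; the attacker only wins if every link goes unprevented, so a single working control breaks the whole chain.

```python
# Sketch: validating a chained attack. Each emulate() is a stand-in for a
# safe technique emulation and returns True if the control blocked it.
def validate_chain(steps):
    """steps: list of (name, emulate) pairs, in attack order."""
    for name, emulate in steps:
        if emulate():
            return f"chain broken at: {name}"
    return "chain succeeded: no control stopped the attack"

# Hypothetical chain mirroring the LockBit scenario above.
chain = [
    ("stolen credentials (MFA)", lambda: False),        # MFA did not block
    ("lateral movement (segmentation)", lambda: True),  # segmentation held
    ("payload execution (EDR)", lambda: False),
]
print(validate_chain(chain))
```

Even in this toy form, the output answers a question no scanner can: which control, if any, actually stopped the chain.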
This is the gap Breach and Attack Simulation fills.
How BAS Changes the Game
We deploy lightweight agents in your environment: Windows, Linux, cloud, wherever you need validation. These agents execute real attack techniques in a controlled, safe manner. Not theoretical vulnerabilities, but actual attack chains mapped to MITRE ATT&CK.
The magic is in the continuous validation. Does your Kubernetes environment change daily? Test daily. A new ransomware variant emerges? Test within hours, not weeks. Each test gives you binary proof: the control prevented the attack, or it didn't. No ambiguity.
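The binary-result model can be sketched like this. All names here are illustrative assumptions, not a real BAS API: each check maps a safe technique emulation to a MITRE ATT&CK technique ID and yields an unambiguous PREVENTED or NOT PREVENTED.

```python
# Toy sketch of the binary pass/fail model (assumed names, not a real API).
from dataclasses import dataclass
from typing import Callable

@dataclass
class TechniqueCheck:
    attack_id: str                 # MITRE ATT&CK ID, e.g. "T1105"
    description: str
    emulate: Callable[[], bool]    # True = control prevented the action

def run_checks(checks):
    # One unambiguous verdict per technique, keyed by ATT&CK ID.
    return {c.attack_id: ("PREVENTED" if c.emulate() else "NOT PREVENTED")
            for c in checks}

checks = [
    TechniqueCheck("T1105", "file download blocked by egress filter",
                   emulate=lambda: True),
    TechniqueCheck("T1059.001", "PowerShell payload blocked by EDR",
                   emulate=lambda: False),
]
for attack_id, outcome in run_checks(checks).items():
    print(f"{attack_id}: {outcome}")
```

Because each verdict is binary and tied to an ATT&CK ID, the same checks can be re-run daily as the environment changes.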
Here's what we demonstrated in our recent webinar in collaboration with Commsec: taking yesterday's NPM worm attack and creating a simulation within hours. Organizations could test whether their specific configurations would have prevented the attack, before another version of the worm tries again.
The shift is from asking "what could go wrong?" to proving "our defenses work."
The current multi-billion-dollar security industry is focused on finding problems. But as one webinar attendee noted, "How does this complement vulnerability management?" The answer: vulnerability management tells you what could be exploited. BAS tells you what would actually work.
This distinction matters. We've seen environments with thousands of "critical" vulnerabilities where the actual risk was minimal because egress filtering worked perfectly. Conversely, we've seen "secure" environments where one misconfigured GitHub Action could compromise everything.
What We're Seeing in Real Deployments
Organizations using BAS report immediate value:
Change validation: "After patching, we know within hours if our defenses actually improved or if we introduced new gaps."
Compliance evidence: "DORA requires proof controls work for the purpose they were defined for. We provide continuous evidence, not annual snapshots."
SOC optimization: "We can validate detection rules against actual attack variations, not hope they catch the next variant."
Risk prioritization: "That critical CVE? In our environment, it's unexploitable due to network segmentation. We can prove it."
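The risk-prioritization quote above boils down to a simple rule that can be sketched as code. This is a hedged illustration with invented names, not a real scoring scheme: combine the scanner's severity with a BAS validation result, and downgrade a finding only when exploitation was demonstrably prevented.

```python
# Illustrative sketch: environment-specific priority = scanner severity,
# unless BAS proved the exploitation path is blocked.
def effective_priority(scanner_severity: str, exploitation_prevented: bool) -> str:
    if exploitation_prevented:
        return "low (control validated)"
    return scanner_severity

# A "critical" CVE sitting behind working network segmentation:
print(effective_priority("critical", exploitation_prevented=True))
```

The point is evidentiary: the downgrade is backed by a test result, not by an analyst's assumption.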
The Future Is Already Here
Three forces make BAS inevitable:
Regulatory pressure: DORA, NIS2, and emerging frameworks demand evidence of control effectiveness, not just control existence.
Attack evolution: Modern attacks chain multiple techniques across cloud, on-premise, and SaaS environments. Point-in-time testing can't keep pace.
Economic reality: There aren't enough skilled pentesters to manually validate every organization continuously. Automation is the only scalable answer.
Moving from Reactive to Proactive
The cycle of breach→panic→cleanup→hope isn't sustainable. Neither is the security theater of annual tests and vulnerability counts.
Real security means knowing your controls work before attacks test them in production. It means validating changes before they create gaps. It means having evidence, not assumptions.
Don't Wait to Be Tomorrow's Headline
This week's NPM attack was version 2 of an attack from 69 days prior. The attackers learned from their mistakes, refined their approach, and tried again. They'll keep iterating.
The question is: will you be validating your defenses continuously, or waiting for version 3 to test them in production?
The technology exists. The need is clear. The only variable is how quickly organizations move from reactive cleanup to proactive validation.