Breaches are more like plane crashes than car crashes

The Swiss Cheese Model

This was taken verbatim from a LinkedIn post that I wrote in a fit of near-rage after seeing my 1,000th vendor claim that human error causes most breaches. There’s more to say about this, and a more constructive way to say it, but perhaps another time.

Saying “most breaches are due to human error” is tired, ill-informed, unhelpful framing. If it’s true at all, it’s true only on a technicality, and it’s wildly misleading.

People make mistakes every day. But the mistakes associated with security incidents are very rarely rare, unknown, or exceptional failure modes. They’re mostly things we know and expect to happen, but that we haven’t taken enough care to prevent (including not adding sufficient friction, where we can’t prevent the thing outright).

Got ransomware? Was the initial access vector email, specifically phishing? Was the root cause the user opening the email and falling for the lure? No way. Never. The root cause was the ability to run arbitrary code or software on a system, instead of using application control (mostly free, but admittedly free like a puppy, not like beer). Or maybe the root cause was a lack of MFA, or an MFA implementation that isn’t sufficiently phishing-resistant.

Breaches are more like plane crashes than car crashes: They don’t happen in an instant. A number of things have to go wrong, and a number of opportunities to avoid or lessen the impact have to be missed.
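The intuition behind the plane-crash framing can be sketched numerically. In the Swiss cheese model, a breach requires every defensive layer to fail at once, so the combined probability is the product of the individual layer failure rates. The layers and probabilities below are invented purely for illustration:

```python
# Toy sketch of the Swiss cheese model: a breach only lands when every
# layer's "hole" lines up. Failure probabilities here are made up.
layers = {
    "phishing-resistant MFA": 0.05,
    "application control": 0.10,
    "egress filtering": 0.20,
    "EDR detection/response": 0.15,
}

breach_probability = 1.0
for name, p_fail in layers.items():
    breach_probability *= p_fail  # all layers must fail together

print(f"P(all layers fail) = {breach_probability:.6f}")  # 0.000150
```

Even mediocre layers compound: no single miss "causes" the breach, and blaming the user who clicked ignores every other hole that had to line up.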

Implying that one of these things is primarily to blame mischaracterizes the problem, and shows a general lack both of understanding how breaches occur and of basic systems thinking.

A simple framework for talking to leadership about a cybersecurity program, particularly for an inbound security leader looking to level-set:

  1. Calibrate (ideally visually) to the state of the program today: Existing level of investment, key controls and functions, etc.
  2. Highlight prevailing risks, and your best estimate of the cost associated with each, if realized.
  3. Propose changes: Investments you want to make, to address risks in #2, leading to . . .
  4. Provide a projected future state of the program: What would the state of the program look like if investments from #3 are made? How would prevailing risks change?

It’s oversimplified, and that’s intentional. You can add depth or show work in any area, if it’s attainable. But for a lot of organizations, this is still a lot of work and an acceptable level of rigor.

August 29, 2024

Where do your incidents come from?

How do you identify your highest-severity incidents in the first place? Which data sources or products deliver the investigative leads? And which are critical to detection and response? These questions speak directly to the value of data sources, products, and functions.

What are the prevailing root causes? This speaks directly to the quality of incident management, particularly post-incident review.
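Answering both questions can start as a simple tally over whatever incident records you keep. The records, field names, and values below are hypothetical, invented only to show the shape of the analysis:

```python
from collections import Counter

# Hypothetical incident records; fields and values are illustrative only.
incidents = [
    {"severity": "high", "lead_source": "EDR", "root_cause": "phishing + no MFA"},
    {"severity": "high", "lead_source": "user report", "root_cause": "phishing + no MFA"},
    {"severity": "low", "lead_source": "SIEM rule", "root_cause": "misconfigured bucket"},
    {"severity": "high", "lead_source": "EDR", "root_cause": "unpatched VPN appliance"},
]

high = [i for i in incidents if i["severity"] == "high"]

# Which data sources actually surface your severe incidents?
print(Counter(i["lead_source"] for i in high))
# What are the prevailing root causes?
print(Counter(i["root_cause"] for i in high))
```

The point isn’t the code; it’s that post-incident review has to capture lead source and root cause consistently enough that a tally like this is even possible.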

I can’t stress enough the importance of being able to answer these questions as the leader of any operational team (cybersecurity, technology, or otherwise).

August 28, 2024

SaaS attack technique matrix Permalink

Inspired by MITRE ATT&CK, the good folks at Push Security have taken a pass at enumerating attack techniques specific to Software-as-a-Service (SaaS) applications.

ATT&CK is a great and robust framework, and I love to see it adapted to capture techniques and tactics for different types of systems.

Push Security SaaS Attacks example

Your SIEM is only as valuable as the time you have to ask it questions.

Do you have time to identify and ask the right questions? And most importantly, do you have time to sift through the results to find answers?

August 22, 2024