Simple, measurable ATT&CK testing with Atomic Red Team


This Google Sheets template aims to make it easy to perform simple, measurable testing of MITRE ATT&CK techniques using Atomic Red Team or an adversary emulation solution of your choosing.


To get started:

  1. Choose the technique that you wish to test. To help prioritize your testing, incorporate rankings from public threat reports, your own intelligence, or any other mechanism that you choose. The top techniques from Red Canary’s annual Threat Detection Report are incorporated into this template for convenience.
  2. Select a corresponding Atomic Red Team test. You can search for or browse tests by tactic, technique, or target platform here.
  3. Document whether the test was:

    • Observed in any manner, from system logs to network events
    • Detected by any of your controls, which could be SIEM analytics, alerts from any of your security products, or activity detected by partners or service providers
    • Mitigated by an existing security control
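The Observed/Detected/Mitigated outcome of each test can also be tracked in code alongside the spreadsheet. A minimal sketch in Python; the record fields and the coverage calculation are illustrative assumptions, not part of the template:

```python
from dataclasses import dataclass

@dataclass
class AtomicTestResult:
    """Outcome of one Atomic Red Team test run (illustrative fields)."""
    technique_id: str   # e.g. "T1059.001"
    test_name: str
    observed: bool      # seen in any manner, from system logs to network events
    detected: bool      # surfaced by a control: SIEM analytics, product alerts, etc.
    mitigated: bool     # blocked by an existing security control

def coverage(results):
    """Fraction of tests that were detected or mitigated."""
    if not results:
        return 0.0
    covered = sum(1 for r in results if r.detected or r.mitigated)
    return covered / len(results)

runs = [
    AtomicTestResult("T1059.001", "PowerShell download cradle", True, True, False),
    AtomicTestResult("T1003.001", "LSASS memory dump", True, False, True),
    AtomicTestResult("T1547.001", "Registry run key persistence", False, False, False),
]
print(f"detected-or-mitigated coverage: {coverage(runs):.0%}")
```

Rolling results up this way gives a single coverage number per technique or platform, which complements the per-test rows in the sheet.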

This is based on the “ATT&CK in Excel” spreadsheet provided by MITRE, one of several resources for working with ATT&CK. It is based on the ATT&CK v12.1 release.

Get the template!


The most prolific ransomware groups in 2022


It’s 2023 and security firms are starting to release findings from 2022 threat data, notably their lists of the most active, impactful ransomware groups.

As with all threat reports, the findings and prevalence are subject to each firm’s visibility, methodology, etc. The data isn’t perfect and it’s not particularly actionable on its own, but it’s interesting and in aggregate can be a useful starting point for other analysis.

The leaderboard

This is not the product of any intelligence analysis. I am but an aggregator, and this leaderboard is simply the result of “assign points based on individual rankings, then count the points” by way of this spreadsheet.
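The “assign points, then count” method can be sketched in a few lines of Python. The point values (rank 1 earns 4 points, rank 4 earns 1) and the reduced two-report input are my own illustrative assumptions, not the spreadsheet’s exact scoring:

```python
from collections import Counter

# Two of the 2022 rankings summarized below (simplified input for illustration).
rankings = {
    "Cisco Talos": ["LockBit", "Hive", "Black Basta", "Vice Society"],
    "Recorded Future": ["LockBit", "Conti", "Pysa", "REvil"],
}

points = Counter()
for ranked_groups in rankings.values():
    for rank, group in enumerate(ranked_groups, start=1):
        points[group] += 5 - rank  # rank 1 -> 4 pts ... rank 4 -> 1 pt

for group, score in points.most_common():
    print(group, score)
```

With every report putting LockBit first, the aggregate leaderboard is unsurprising at the top; the points mostly serve to order everyone below it.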


Read on for some thoughts re: how this type of data can be useful, followed by summary data and charts from each report.

The case for imperfect threat data

A good use case for these types of lists–and a way to make them actionable–is to look at tactics starting with initial access and progressing through the intrusion lifecycle. For each tactic, look for common vectors and MITRE ATT&CK techniques (some of this is readily available in the source reports below). The goal is to see whether we can glean good enough insights and do it quickly, assess risks, and take preventative measures.

This sounds simple and obvious because it is. The short list of “how ransomware happens” isn’t terribly long and few TTPs are novel. Unfortunately, it’s easy and common for teams to get wrapped up in the minutiae of threat modeling, risk quantification, etc.–perfect over plenty good enough–and in doing so waste valuable time that should be spent addressing the high impact, high likelihood risks that are staring them right in the face.
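The per-tactic review described above can be sketched as a simple mapping from intrusion lifecycle stages to commonly reported ransomware techniques. The selection of techniques here is illustrative and far from complete; the IDs are standard MITRE ATT&CK Enterprise techniques:

```python
# Walk the intrusion lifecycle, initial access first, and list the ATT&CK
# techniques commonly seen in ransomware intrusions (illustrative subset).
RANSOMWARE_TTPS = {
    "Initial Access": [
        "T1566 Phishing",
        "T1190 Exploit Public-Facing Application",
        "T1133 External Remote Services",
    ],
    "Execution": ["T1059 Command and Scripting Interpreter"],
    "Lateral Movement": ["T1021 Remote Services"],
    "Impact": ["T1486 Data Encrypted for Impact"],
}

def review_order():
    """Tactics in the order you'd assess them, initial access first."""
    return list(RANSOMWARE_TTPS)

for tactic in review_order():
    print(tactic, "->", ", ".join(RANSOMWARE_TTPS[tactic]))
```

For each tactic, the question is simply whether you have preventative or detective coverage for the handful of techniques that keep showing up in the source reports.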

Reports and summary data

IBM X-Force

  1. LockBit
  2. Phobos, WannaCry (tie)
  3. ALPHV/BlackCat
  4. Conti, Djvu, Babuk (tie)



  1. LockBit 2.0
  2. LockBit 3.0
  3. ALPHV
  4. Black Basta



  1. LockBit
  2. ALPHV
  3. Hive
  4. Black Basta



  1. LockBit
  2. BlackCat
  3. Phobos
  4. Conti


Cisco Talos

  1. LockBit
  2. Hive
  3. Black Basta
  4. Vice Society


Recorded Future

  1. LockBit
  2. Conti
  3. Pysa
  4. REvil



  1. LockBit
  2. BlackCat
  3. Hive
  4. Conti



  1. LockBit
  2. Black Basta
  3. Hive
  4. BlackCat/ALPHV


Incidents as a measure of cybersecurity progress


Phil Venables published a helpful collection of ways that risk and cybersecurity leaders can share their successes, ideally on an ongoing basis. His working theory, which I believe is correct, is that we’re not great at this. And as a result, many of our peers only hear from us when things go sideways, which leads to a variety of problems.

His first suggestion is aptly focused on incidents:

The classic case is incidents. Your main moment in the sun might be in the middle of an incident. If successfully detected and recovered from then you will likely get some kudos. But, too many of these and leadership might conclude you’re not doing an effective job. However, you can provide a better perspective if you place these incidents in some context, such as the percentage of incidents experienced vs. incidents that were avoided because threats were thwarted or risks were mitigated. Of course, this can require some degree of subjectivity depending on how you measure it. You could use a regularly communicated set of messages such as: this month our controls stopped 100,000+ phishing emails, repelled 200,000+ perimeter intrusion attempts, deflected 150,000+ malware infection attempts, and so on vs. only reporting the incidents. In the event of a truly sophisticated intrusion or other circumstance then those incidents might be better seen as what they are, that is an unusual event, not an event that is thought to happen upon every intrusion attempt.

Understanding how to think and talk about incidents is critically important. Among other things, incidents are a foundational measure of control effectiveness, continuous improvement, and overall operational workload in security operations.

I’ve encountered this challenge a number of times over the years in organizations big and small. A simplified version of my approach to incidents:

  1. Define “incident”: It should be clear to everyone when an incident occurs. Important so that you can respond, obviously, but also so that you know when to capture relevant data and information.
  2. Define incident severity levels: A typical model ranges from level 5 (least severe, think “a control prevented delivery of a phishing email”) to level 1 (most severe, think “we have to notify our customers and board of directors”).
  3. Have a simple, repeatable, measured incident management process: Determine where you’ll capture data and information related to incidents, and your workflow from initial documentation, response, and post-incident analysis.

If you do these three things, you’re positioned to respond, measure, improve, and communicate incident-related information. But to gain (and ultimately share!) useful insights, you have to ask the right questions and look at the data in useful ways. A few aspects of incidents that I’ve found useful for gaining insights and reporting include:

  • Overall number of incidents: This is a performance indicator, not a success measure. Think of it as the building block or denominator for most other incident-related reporting. That said, there’s plenty to be learned from the numbers alone. For example, it’s okay to have a lot of incidents, particularly if they’re lower severity and you’re learning from them and making improvements. Conversely, having very few incidents might be cause for concern, as it might be a sign that incidents aren’t being detected or properly documented.
  • Incidents by root cause: One of the most useful data points to capture during post-incident analysis is the root cause. In general, repeat root causes aren’t ideal and are indicators that you want to take some preventative action. You’ll also learn that getting to a single root cause isn’t always easy.
  • Incidents by severity: If you have 100 incidents in January, and 25% are higher severity, it’s probably a positive sign if you still have 100 in February but only 10% are higher severity.
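The severity and root-cause metrics above are easy to compute once incidents are recorded consistently. A short sketch using made-up monthly data (the numbers mirror the 25%-to-10% example, scaled down, and are purely illustrative):

```python
from collections import Counter

# Illustrative monthly severity data (1 = most severe, 5 = least severe).
january = [1, 2, 2, 3, 4, 4, 5, 5, 5, 5]    # 10 incidents, 3 at severity 1-2
february = [2, 3, 3, 4, 4, 5, 5, 5, 5, 5]   # 10 incidents, 1 at severity 1-2

def high_severity_share(severities, threshold=2):
    """Fraction of incidents at or above the threshold (lower number = worse)."""
    return sum(1 for s in severities if s <= threshold) / len(severities)

# Same incident volume month over month, but the high-severity share dropped.
print(f"Jan: {high_severity_share(january):.0%}")   # 30%
print(f"Feb: {high_severity_share(february):.0%}")  # 10%

# Root-cause counts surface the repeats that deserve preventative action.
repeats = Counter(["phishing", "unpatched VPN", "phishing", "misconfiguration"])
```

The overall count is the denominator; the shares and root-cause tallies are where the trends, and the reportable insights, live.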

There are many more, and far better examples. But in general, insights gleaned from incidents–trends in particular–are one of the most useful means of assessing operational maturity and making meaningful improvements to any system. As a cybersecurity leader, you can’t get too good at understanding and being comfortable talking about incidents and what they mean to your team and organization.

LastPass: The breach that keeps on giving


LastPass was breached in August, and has since updated their breach disclosure several times, each update just a little bit worse and more concerning than the last. Unfortunately, for a business with a large consumer customer base, it’s almost impossible to use these disclosures to determine whether LastPass should be trusted. For security practitioners, it’s much easier:

The cloud storage service accessed by the threat actor is physically separate from our production environment.

Unless zero employees or systems have access to both cloud storage and production (and there are never zero), this statement may be technically accurate, but it is a clear lie of omission.

And then there’s these two statements, which are together terrifying:

[T]he threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers.

The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.

Set aside the fact that the threat actor has everyone’s vault to sort, prioritize, and attack at their leisure. They also have each customer’s email address, mailing address, telephone number, and a convenient list of services used. Combine this with data and information from unrelated breaches, and this is a targeting bonanza.

No one’s perfect, but this is lucky number seven for LastPass as of this writing. It’s time to suggest to those who trust you that they should no longer trust LastPass.