Introducing Atomic Scorecard: A test tracking tool for ATT&CK + Atomic Red Team

If you haven’t tested it, it doesn’t work. This is a foundational thesis that led to the creation of the Atomic Red Team project, and the concept of “atomic testing” for cybersecurity teams. As the project has been integrated into myriad tools and processes, one thing we’ve learned is that testing should be approached more like exercise than an exam. Even a small amount of regular testing pays far larger dividends than annual or “big bang” red team engagements.

One way to encourage ongoing testing is a framework for tracking, scoring, and measuring tests and test outcomes. For some time, I’ve maintained a crude spreadsheet that can be used to record and score atomic tests. In the spirit of making this a bit more accessible, I took a crack at converting that venerable spreadsheet into a web-based tool.

What is it?

At its core, Atomic Scorecard is a simple system of record for atomic tests. Like Atomic Red Team, it uses MITRE ATT&CK as the foundation, but it overlays industry threat intelligence and makes it easy to find the atomic tests relevant to each technique.

No account is needed. There’s no database or other backend. None of your test data is stored.

Intelligence-driven prioritization

The single most common hangup related to ATT&CK is that it’s expansive, and it’s not easy to figure out where to start. Relatively few organizations produce enough first-party threat intelligence to know which techniques are most important to defend against, and even then, this isn’t necessarily representative of the techniques that present the most risk. What we do know is that not all techniques are created equal—some are far more prevalent than others. From Red Canary’s 2026 Threat Detection Report:

[A] relatively small number of techniques play a role in a disproportionately large number of detections . . . [O]ver the last five years, we’ve detected at least one of the 10 most prevalent techniques in 46 percent of all detections. Over the same time period, we detected at least one of the top 20 techniques in 63 percent of detections.

By default, technique rankings are based on Red Canary’s annual Threat Detection Report, representing the most prevalent techniques observed across thousands of companies of every size and industry. Also included are Mandiant’s top techniques and subtechniques, as well as the complete M-Trends appendix, which provides the top techniques observed for each ATT&CK Tactic.

That said, there are lots of useful sources of threat intelligence, and every company, environment, and set of priorities is unique. So, you can easily upload your own custom rankings to reflect the specific threats your organization faces.
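As an illustration of what a custom ranking might look like, here’s a minimal sketch in Python. The field names (`technique_id`, `rank`) and the JSON shape are my own placeholders, not the tool’s documented upload format, so check the Maintainer tools for the actual schema:

```python
import json

# Hypothetical custom ranking: ATT&CK technique IDs mapped to a
# priority rank (1 = highest). Field names are illustrative only.
custom_ranking = [
    {"technique_id": "T1059.001", "rank": 1},  # Command and Scripting Interpreter: PowerShell
    {"technique_id": "T1003", "rank": 2},      # OS Credential Dumping
    {"technique_id": "T1027", "rank": 3},      # Obfuscated Files or Information
]

# Serialize for upload, ordered by rank.
payload = json.dumps(sorted(custom_ranking, key=lambda t: t["rank"]), indent=2)
print(payload)
```

However your organization sources its rankings, the key is that they reflect your own intelligence, not just industry defaults.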

Integration of ATT&CK + Atomic Red Team

The tool is built to move you from documentation to execution in seconds:

  • Every technique is linked directly to the official MITRE ATT&CK documentation
  • For any technique where an Atomic Red Team test exists, a clickable logo appears that takes you directly to tests that correspond to that technique

I recommend using the Invoke-AtomicRedTeam framework, which makes test selection and execution fast and easy, and can optionally handle things like prerequisites and cleanup.

Tracking and reporting

Testing is less impactful if you don’t record and measure the results. For every technique that you test, you can categorize test outcomes into one of four states:

  • Missed: The attack went completely unnoticed.
  • Observed: You saw the telemetry, but no alert was triggered.
  • Detected: You were alerted to the activity.
  • Mitigated: The attack was blocked or interdicted by existing controls.
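To make the four states concrete, here’s a minimal sketch of how outcomes could be tallied into coverage statistics. This is purely illustrative, written in Python with names of my own choosing; the tool itself does all of this for you in the browser:

```python
from collections import Counter
from enum import Enum

class Outcome(Enum):
    MISSED = "missed"        # attack went completely unnoticed
    OBSERVED = "observed"    # telemetry existed, but no alert fired
    DETECTED = "detected"    # an alert was raised on the activity
    MITIGATED = "mitigated"  # existing controls blocked the attack

# Hypothetical test results, keyed by ATT&CK technique ID.
results = {
    "T1059.001": Outcome.DETECTED,
    "T1003": Outcome.OBSERVED,
    "T1027": Outcome.MISSED,
    "T1547.001": Outcome.MITIGATED,
}

# Tally outcomes for a simple coverage summary.
tally = Counter(outcome.value for outcome in results.values())
print(dict(tally))
```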

You can also add notes to a given technique, since a simple status may not capture important context, or mark a technique as not applicable to your environment.

A simple dashboard at the top makes it easy to see your test coverage and outcomes.

Flexibility and customization

To ensure this tool stays relevant as ATT&CK, Atomic Red Team, and your priorities evolve, the Maintainer tools allow you to update or customize:

  • ATT&CK version
  • Atomic Red Team coverage
  • Technique ranking

There’s also a simple JSON-based backup and restore capability. Export your entire project as a JSON structure at any time. When you’re ready to resume, just import the file and pick up exactly where you left off.
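The export schema isn’t documented here, so the structure below is invented for illustration; the point is simply that a JSON export round-trips cleanly between sessions:

```python
import json

# Invented project structure for illustration only; the tool's
# actual export schema may differ.
project = {
    "attack_version": "16.1",
    "results": {
        "T1059.001": {"status": "detected", "notes": "EDR alert fired"},
    },
}

# Export to a JSON string, then "restore" by parsing it back.
exported = json.dumps(project, indent=2)
restored = json.loads(exported)
assert restored == project  # round-trip is lossless
print(restored["results"]["T1059.001"]["status"])
```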

Share your feedback

If there’s something you’d like to see that isn’t included, something isn’t working, or if you’d just like to send some feedback, you can reach me via email: kwm @ this domain.

Ready to start testing? Give it a go at https://atomicscorecard.com

Customer discovery questions (or, better alternatives to “What keeps you up at night?”)

Whether you’re a founder, in sales, an account manager, or in almost any other customer-facing role, the most valuable thing you can do is ask your customers questions and learn what their goals, incentives, and measures look like.

This is a short list of questions I’ve found lead to substantive discussion (these are geared toward cybersecurity teams, but most are broadly applicable):

Q: How is your team measured?

What I’m listening for:

  • Objectives, ideally those that roll up and support the broader organization
  • Cybersecurity maturity models or frameworks (NIST CSF, CMMC, C2M2, etc.)
  • Compliance audits or certifications (SOC 2, ISO 27001, FedRAMP, etc.)
  • Risk measures, commonly specific to realized risks (exposure or vulnerability management, third-party risk, etc.)
  • Incident measures related to detection, investigation, containment, and response
  • Other basic operational measures, like tickets or cases

Q: Where do your incidents come from?

This is a simple question, but sometimes it lands. If it’s helpful to follow with some elaboration, consider:

  • Q: What controls are most useful in helping you identify higher severity incidents?
  • Q: What data or tools do you find most useful for investigation? Response?

These can lead to useful insights related to control effectiveness, operational maturity, and incident management. A good team can tell you how many incidents they have; a great team can speak to trends related to root cause, severity, cost, mean time to detect/respond, and more. Teams that are exceptional at incident management will use incidents as a key lever for driving continuous improvement and change.

Q: What does your roadmap look like for the coming months or year?

Here I’m listening for initiatives that:

  • Align with what we do today, where we can satisfy the requirement or meaningfully accelerate progress
  • Are on our roadmap, as this helps with prioritization, and reinforces that we have some shared vision
  • Include tooling consolidation or platform migrations, which can indicate a natural entry point, or a risk if the consolidation cuts you out
  • Aren’t on our radar at all, particularly those that factor into competitive losses

Q: If you could add a single skillset to your team today, what would it be? If you could add an entire team, what would you have them do?

What do they know they want? Usually, they’ll frame it around a specific, acute problem they can’t solve or solution they can’t build in-house.

Note: There’s a subtle but important difference between this type of question and “What causes you to lose sleep?” When asked about fears, a mature team will probably name a specific threat or risk. You can then explain how your product addresses it and hope they connect the dots — but unless that fear is your primary point of value, you’ve gone down a rabbit hole and likely missed the broader product story.

Assorted things I’ve read, watched, or listened to:

  1. A fighting retreat - An email from Will Wilson (CEO and co-founder of Antithesis) to his company, on delaying the inevitable change that occurs when a startup experiences significant growth.

  2. AI as tradecraft: How threat actors operationalize AI - A solid roundup of adversaries’ various uses for AI throughout the intrusion lifecycle.

  3. Deception and Detection: Why Artificial Intelligence Empowers Cyber Defense over Offense - “Rather than heralding a revolution, AI automation is likely to further tame cyber conflict. Highly skilled human operators, not AI, will be necessary to avoid being detected by AI-empowered defenders.”

March 15, 2026

Assorted things I’ve read, watched, or listened to:

  1. $ Why AI Is ‘Not Particularly Good’ at Curing Disease (Plus: The Next GLP-1 Boom and Why America Hates Big Pharma): A wide-ranging interview with Dave Ricks, the CEO of Eli Lilly - Pound for pound, Derek Thompson may be my most valuable Substack or newsletter subscription. I learned so much from this.

  2. The Brand Age - “One obvious lesson is to stay away from brand. Indeed it’s probably a good idea not just to avoid buying brand, but to avoid selling it too. Sure, you might be able to make money this way — though I bet it’s harder than it looks — but pushing people’s brand buttons is just not a good problem to work on, and it’s hard to do good work without a good problem.”

  3. The Signal: The cybersecurity economy, charted. - “Real-time venture funding, M&A, and market intelligence across thousands of companies and investors in the global cybersecurity industry. The market intelligence platform behind the Return on Security briefing.”

  4. The Hidden Cost of Hard-to-Fire Labor Laws: Why European Firms Don’t Take Risks - Related to items I shared back in February.

March 11, 2026

Assorted things I’ve read, watched, or listened to:

  1. How will OpenAI compete? - “OpenAI has some big questions. It doesn’t have unique tech. It has a big user base, but with limited engagement and stickiness and no network effect. The incumbents have matched the tech and are leveraging their product and distribution. And a lot of the value and leverage will come from new experiences that haven’t been invented yet, and it can’t invent all of those itself. What’s the plan?”

  2. Be a thermostat, not a thermometer - Solid life advice, disguised as employment (and management) advice.

  3. Time to Move On – The Reason Relationships End

February 24, 2026