An open source catalog of offensive security tools

less than 1 minute read

From Gwendal Le Coguic (@gwen001 / @gwendallecoguic), offsec.tools is a fairly wide-ranging collection of offensive security tools. At the time of publication, it includes close to 700 tools, though some very popular free tools (e.g., mimikatz, impacket) are missing, and the project’s appetite for cataloging commercial tools (e.g., Pegasus, FinFisher) is unclear.

Roundup of commercial spyware and digital forensics technology used by governments

less than 1 minute read

From the Carnegie Endowment for International Peace:

The dataset provides a global inventory of commercial spyware & digital forensics technology procured by governments. It focuses on three overarching questions: Which governments show evidence of procuring and using commercial spyware? Which private sector companies are involved and what are their countries of origin? What activities have governments used the technology for?

The leaderboard is interesting, albeit predictable.

Simple, measurable ATT&CK testing with Atomic Red Team

less than 1 minute read

Updated June 2024 to support ATT&CK v15.1.

This Google Sheets template aims to make it easy to perform simple, measurable testing of MITRE ATT&CK techniques using Atomic Red Team or an adversary emulation solution of your choosing.


To get started:

  1. Choose the technique that you wish to test. To help prioritize your testing, incorporate rankings from public threat reports, your own intelligence, or any other mechanism that you choose. The top techniques from Red Canary’s annual Threat Detection Report are incorporated into this template for convenience.
  2. Select a corresponding Atomic Red Team test. You can search for or browse tests by tactic, technique, or target platform here.
  3. Document whether the test was:

    • Observed in any manner, from system logs to network events
    • Detected by any of your controls, which could be SIEM analytics, alerts from any of your security products, or activity detected by partners or service providers
    • Mitigated by an existing security control (one way to record these outcomes in code is sketched after this list)
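
If you'd rather keep results in code than in the spreadsheet, here is a minimal Python sketch of the same record-keeping, assuming one record per Atomic test run. The field names, example technique IDs, and test names are illustrative placeholders, not part of the template.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the template's columns: one row per Atomic test run.
@dataclass
class TestResult:
    technique_id: str        # an ATT&CK technique ID, e.g. "T1059.001"
    test_name: str           # the Atomic Red Team test that was run (illustrative name)
    observed: bool = False   # seen in any manner, from system logs to network events
    detected: bool = False   # surfaced by a control (SIEM analytics, product alerts, partners)
    mitigated: bool = False  # blocked outright by an existing security control

results = [
    TestResult("T1059.001", "PowerShell download cradle", observed=True, detected=True),
    TestResult("T1003.001", "Dump LSASS memory", observed=True),
]

# Simple coverage summary across everything tested so far.
total = len(results)
for outcome in ("observed", "detected", "mitigated"):
    count = sum(getattr(r, outcome) for r in results)
    print(f"{outcome}: {count}/{total}")
```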

This template is built on the “ATT&CK in Excel” spreadsheet provided by MITRE, one of several resources for working with ATT&CK. It was originally created from the ATT&CK v12.1 release and has since been updated for v15.1.

Get the template!

Discussion on LinkedIn, Twitter

The most prolific ransomware groups in 2022

1 minute read

It’s 2023 and security firms are starting to release findings from 2022 threat data, notably their lists of the most active, impactful ransomware groups.

As with all threat reports, the findings and prevalence are subject to each firm’s visibility, methodology, etc. The data isn’t perfect and it’s not particularly actionable on its own, but it’s interesting, and in aggregate it can be a useful starting point for other types of analysis.

The leaderboard

This is not the product of original intelligence analysis. I’ve aggregated data from reports that contain ransomware group rankings, assigned points based on relative ranking, and this is the result.
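
As a rough illustration of that aggregation, here's a Python sketch using four of the rankings summarized later in this post. The point scheme (4 points for a first-place ranking down to 1 for fourth) is an assumption for demonstration, not necessarily the exact weighting behind the leaderboard.

```python
from collections import Counter

# Top-four rankings taken from a few of the reports summarized below.
report_rankings = {
    "Coalition":   ["ALPHV/BlackCat", "LockBit", "Royal", "Hive"],
    "GuidePoint":  ["LockBit", "ALPHV/BlackCat", "Hive", "Black Basta"],
    "Sophos":      ["LockBit", "ALPHV/BlackCat", "Phobos", "Conti"],
    "Cisco Talos": ["LockBit", "Hive", "Black Basta", "Vice Society"],
}

# Assumed scheme: a first-place ranking earns 4 points, second place 3, and so on.
scores = Counter()
for rankings in report_rankings.values():
    for position, group in enumerate(rankings):
        scores[group] += len(rankings) - position

for group, points in scores.most_common():
    print(f"{points:2d}  {group}")
```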


Read on for some thoughts on how this type of data can be useful, followed by summary data from each report.

The case for imperfect threat data

A good use case for these types of lists–and a way to make them actionable–is to look at tactics starting with initial access and progressing through the intrusion lifecycle. For each tactic, look for common vectors and MITRE ATT&CK techniques (some of this is readily available in the source reports below). The goal is to glean good-enough insights quickly, assess risks, and take preventative measures.

This sounds simple and obvious because it is. The list of “how ransomware happens” isn’t terribly long and few TTPs are novel. Unfortunately, it’s easy and common for teams to get wrapped up in the minutiae of threat modeling, risk quantification, etc.–perfect over plenty good enough–and in doing so waste valuable time that could be spent addressing the high-impact, high-likelihood risks staring them right in the face.

Reports and summary data

Coalition, Inc

  1. ALPHV/BlackCat
  2. LockBit
  3. Royal
  4. Hive


IBM X-Force

  1. LockBit
  2. Phobos, WannaCry (tie)
  3. ALPHV/BlackCat
  4. Conti, Djvu, Babuk (tie)


Intel471

  1. LockBit 2.0
  2. LockBit 3.0
  3. ALPHV/BlackCat
  4. Black Basta


GuidePoint

  1. LockBit
  2. ALPHV/BlackCat
  3. Hive
  4. Black Basta


Sophos

  1. LockBit
  2. ALPHV/BlackCat
  3. Phobos
  4. Conti


Cisco Talos

  1. LockBit
  2. Hive
  3. Black Basta
  4. Vice Society


Recorded Future

  1. LockBit
  2. Conti
  3. Pysa
  4. REvil


BlackFog

  1. LockBit
  2. ALPHV/BlackCat
  3. Hive
  4. Conti


Trustwave

  1. LockBit
  2. Black Basta
  3. Hive
  4. ALPHV/BlackCat

Discussion on LinkedIn, Twitter

Incidents as a measure of cybersecurity progress

3 minute read

Phil Venables published a helpful collection of ways that risk and cybersecurity leaders can share their successes, ideally on an ongoing basis. His working theory, which I believe is correct, is that we’re not great at this. And as a result, many of our peers only hear from us when things go sideways, which leads to a variety of problems.

His first suggestion is aptly focused on incidents:

The classic case is incidents. Your main moment in the sun might be in the middle of an incident. If successfully detected and recovered from then you will likely get some kudos. But, too many of these and leadership might conclude you’re not doing an effective job. However, you can provide a better perspective if you place these incidents in some context, such as the percentage of incidents experienced vs. incidents that were avoided because threats were thwarted or risks were mitigated. Of course, this can require some degree of subjectivity depending on how you measure it. You could use a regularly communicated set of messages such as: this month our controls stopped 100,000+ phishing emails, repelled 200,000+ perimeter intrusion attempts, deflected 150,000+ malware infection attempts, and so on vs. only reporting the incidents. In the event of a truly sophisticated intrusion or other circumstance then those incidents might be better seen as what they are, that is an unusual event, not an event that is thought to happen upon every intrusion attempt.

Understanding how to think and talk about incidents is critically important. Among other things, incidents are a foundational measure of control effectiveness, continuous improvement, and overall operational workload in security operations.

I’ve encountered this challenge a number of times over the years in organizations big and small. A simplified version of my approach to incidents:

  1. Define “incident”: It should be clear to everyone when an incident occurs. Important so that you can respond, obviously, but also so that you know when to capture relevant data and information.
  2. Define incident severity levels: A typical model ranges from level 5 (least severe, think “a control prevented delivery of a phishing email”) to level 1 (most severe, think “we have to notify our customers and board of directors”).
  3. Have a simple, repeatable, measured incident management process: Determine where you’ll capture data and information related to incidents, and define your workflow from initial documentation through response and post-incident analysis (a minimal sketch of what this can look like follows this list).
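
As a sketch of what those three steps can yield in practice, here's a minimal Python data model, assuming the 5-to-1 severity convention described above. Every class and field name is a hypothetical example, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum
from typing import Optional

class Severity(IntEnum):
    # 5 is least severe ("a control prevented delivery of a phishing email");
    # 1 is most severe ("we have to notify our customers and board of directors").
    SEV5 = 5
    SEV4 = 4
    SEV3 = 3
    SEV2 = 2
    SEV1 = 1

@dataclass
class Incident:
    opened_at: datetime
    summary: str
    severity: Severity
    root_cause: Optional[str] = None     # filled in during post-incident analysis
    closed_at: Optional[datetime] = None

# Example: a low-severity incident captured through the normal workflow.
incident = Incident(
    opened_at=datetime(2023, 1, 15, 9, 30),
    summary="Phishing email blocked before delivery",
    severity=Severity.SEV5,
    root_cause="credential phishing campaign",
)
print(incident.severity.name, incident.summary)
```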

If you do these three things, you’re positioned to respond, measure, improve, and communicate incident-related information. But to gain (and ultimately share!) useful insights, you have to ask the right questions and look at the data in useful ways. A few aspects of incidents that I’ve found useful for gaining insights and reporting include:

  • Overall number of incidents: This is a performance indicator, not a success measure. Think of it as the building block or denominator for most other incident-related reporting. That said, there’s plenty to be learned from the numbers alone. For example, it’s okay to have a lot of incidents, particularly if they’re lower severity and you’re learning from them and making improvements. Conversely, having very few incidents might be cause for concern, as it might be a sign that incidents aren’t being detected or properly documented.
  • Incidents by root cause: One of the most useful data points to capture during post-incident analysis is the root cause. In general, repeat root causes aren’t ideal and are indicators that you want to take some preventative action. You’ll also learn that getting to a single root cause isn’t always easy.
  • Incidents by severity: If you have 100 incidents in January and 25% are higher severity, it’s probably a positive sign if you still have 100 in February but only 10% are higher severity (a quick calculation along these lines is sketched below).
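
To make the severity-trend arithmetic concrete, here's a small Python calculation using the hypothetical monthly numbers from the bullet above; root-cause tallies work the same way.

```python
from collections import Counter

# Hypothetical monthly counts matching the example above:
# 100 incidents each month, 25% higher severity in January vs. 10% in February.
monthly = {
    "January":  {"higher": 25, "lower": 75},
    "February": {"higher": 10, "lower": 90},
}

for month, counts in monthly.items():
    total = counts["higher"] + counts["lower"]
    share = counts["higher"] / total
    print(f"{month}: {total} incidents, {share:.0%} higher severity")

# Root-cause reporting works the same way: tally the causes captured during
# post-incident analysis and watch for repeats.
root_causes = Counter(["credential phishing", "unpatched VPN appliance", "credential phishing"])
print(root_causes.most_common())
```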

There are many more, and far better examples. But in general, insights gleaned from incidents–trends in particular–are one of the most useful means of assessing operational maturity and making meaningful improvements to any system. As a cybersecurity leader, you can’t get too good at understanding and being comfortable talking about incidents and what they mean to your team and organization.