Google DFIQ: Open source building blocks for IR playbooks

The DFIQ project (GitHub, website) is an open source collection of questions that analysts should ask during certain types of investigations. There’s a simple tagging system that allows a unique question to be associated with platforms, primitives like file or network knowledge, and of course MITRE ATT&CK techniques. Questions are used in the context of scenarios, which are effectively types of incidents.
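One way to picture that structure is the sketch below: a question tagged with platforms, primitives, and ATT&CK techniques, grouped under a scenario. This is purely illustrative Python, not DFIQ's actual file format; the names and tag values are invented.

```python
# Purely illustrative sketch of the DFIQ idea: a question tagged with
# platforms, primitives, and ATT&CK techniques, grouped under a scenario.
# This is NOT DFIQ's actual file format; names and values are invented.

question = {
    "question": "Which files were downloaded by the compromised identity?",
    "platforms": ["GCP"],
    "primitives": ["file knowledge", "cloud logs"],
    "attack_techniques": ["T1530"],  # Data from Cloud Storage
}

scenario = {
    "name": "Cloud Project Compromise Assessment",
    "questions": [question],
}

print(f"{scenario['name']}: {len(scenario['questions'])} question(s)")
```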

Example: Cloud Project Compromise Assessment

I’m not sure I can overstate the importance or utility of this project. DFIQ scenarios, facets, and questions are key ingredients used in incident response playbooks, and to have them organized and publicly available is an asset to the DFIR and cybersecurity communities.

Exposure management via CISA’s Top Routinely Exploited Vulnerabilities

This is a textbook example of the type of input you’d apply to an exposure management process (a rough sketch in code follows the list):

  1. Take the CISA list, along with others, and overlay these vulnerabilities atop your attack surface. The resulting list is your most at-risk assets.
  2. Remove any assets where you have a strong mitigating control in place.
  3. Patch or otherwise mitigate these vulnerabilities on these assets (really, patch them all, and then consider further mitigations, should the class of attack reappear in the form of another vulnerability down the line).
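Here is a minimal sketch of steps 1 and 2, assuming the CISA list has been parsed into a set of CVE IDs and the asset inventory records each asset's known CVEs and mitigating controls. All names, CVEs, and data shapes below are hypothetical.

```python
# Minimal sketch of steps 1 and 2. The exploited-CVE set and the asset
# inventory are stand-ins for whatever sources you actually have.

ROUTINELY_EXPLOITED = {"CVE-2023-34362", "CVE-2023-4966", "CVE-2021-44228"}

assets = [
    {"name": "vpn-gw-01", "cves": {"CVE-2023-4966"}, "mitigations": set()},
    {"name": "build-srv-07", "cves": {"CVE-2021-44228"}, "mitigations": {"waf-log4j-rule"}},
    {"name": "hr-portal", "cves": {"CVE-2020-0601"}, "mitigations": set()},
]

# Step 1: overlay the exploited-CVE list atop the attack surface.
at_risk = [a for a in assets if a["cves"] & ROUTINELY_EXPLOITED]

# Step 2: set aside assets where a strong mitigating control is already in place.
needs_patching = [a for a in at_risk if not a["mitigations"]]

# Step 3 is then working this list: patch first, layer mitigations as needed.
for asset in needs_patching:
    print(asset["name"], sorted(asset["cves"] & ROUTINELY_EXPLOITED))
```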

Note that CISA has produced these lists for three years, and there are related lists (some produced jointly with other partners). You can find them here: https://www.cisa.gov/search?g=Top%20Routinely%20Exploited%20Vulnerabilities

Incidents as a measure of cybersecurity progress

Phil Venables published a helpful collection of ways that risk and cybersecurity leaders can share their successes, ideally on an ongoing basis. His working theory, which I believe is correct, is that we’re not great at this. And as a result, many of our peers only hear from us when things go sideways, which leads to a variety of problems.

His first suggestion is aptly focused on incidents:

The classic case is incidents. Your main moment in the sun might be in the middle of an incident. If successfully detected and recovered from then you will likely get some kudos. But, too many of these and leadership might conclude you’re not doing an effective job. However, you can provide a better perspective if you place these incidents in some context, such as the percentage of incidents experienced vs. incidents that were avoided because threats were thwarted or risks were mitigated. Of course, this can require some degree of subjectivity depending on how you measure it. You could use a regularly communicated set of messages such as: this month our controls stopped 100,000+ phishing emails, repelled 200,000+ perimeter intrusion attempts, deflected 150,000+ malware infection attempts, and so on vs. only reporting the incidents. In the event of a truly sophisticated intrusion or other circumstance then those incidents might be better seen as what they are, that is an unusual event, not an event that is thought to happen upon every intrusion attempt.

Understanding how to think and talk about incidents is critically important. Among other things, incidents are a foundational measure of control effectiveness, continuous improvement, and overall operational workload in security operations.

I’ve encountered this challenge a number of times over the years in organizations big and small. A simplified version of my approach to incidents:

  1. Define “incident”: It should be clear to everyone when an incident occurs. This is important so that you can respond, obviously, but also so that you know when to capture relevant data and information.
  2. Define incident severity levels: A typical model ranges from level 5 (least severe, think “a control prevented delivery of a phishing email”) to level 1 (most severe, think “we have to notify our customers and board of directors”).
  3. Have a simple, repeatable, measured incident management process: Determine where you’ll capture data and information related to incidents, and define your workflow from initial documentation through response and post-incident analysis. (A rough sketch of an incident record that supports this follows.)
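As one way to make these three steps concrete, here is a minimal sketch of what a captured incident record might look like, assuming the five-level severity model described above. The field names are illustrative, not a standard schema.

```python
# A rough sketch of an incident record using a five-level severity model.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum


class Severity(IntEnum):
    SEV1 = 1  # most severe: e.g., customer and board notification required
    SEV2 = 2
    SEV3 = 3
    SEV4 = 4
    SEV5 = 5  # least severe: e.g., a control prevented delivery of a phishing email


@dataclass
class Incident:
    identifier: str
    opened: date
    severity: Severity
    root_cause: str | None = None        # captured during post-incident analysis
    notes: list[str] = field(default_factory=list)


incident = Incident("IR-2024-0042", date(2024, 2, 14), Severity.SEV4)
```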

If you do these three things, you’re positioned to respond, measure, improve, and communicate incident-related information. But to gain (and ultimately share!) useful insights, you have to ask the right questions and look at the data in useful ways. A few aspects of incidents that I’ve found useful for gaining insights and reporting (with a rough sketch of the calculations after the list) include:

  • Overall number of incidents: This is a performance indicator, not a success measure. Think of it as the building block or denominator for most other incident-related reporting. That said, there’s plenty to be learned from the numbers alone. For example, it’s okay to have a lot of incidents, particularly if they’re lower severity and you’re learning from them and making improvements. Conversely, having very few incidents might be cause for concern, as it might be a sign that incidents aren’t being detected or properly documented.
  • Incidents by root cause: One of the most useful data points to capture during post-incident analysis is the root cause. Repeat root causes aren’t ideal and are an indicator that you want to take some preventative action. Repeat root causes are also a signal that you aren’t performing and/or following through with post-incident analysis and related findings.
  • Incidents by severity: Again, it’s okay to experience incidents. Specific to severity, you want fewer higher severity incidents over time. For example, if you had 100 incidents in January and 25% were higher severity, having 100 incidents in February where only 10% are higher severity is a sign of progress.
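A minimal sketch of these cuts follows, assuming incidents are captured as simple records with a month, a severity level (1 being most severe), and a root cause. The data is made up for illustration.

```python
# Three reporting cuts: volume per month, repeat root causes, and the share
# of higher-severity incidents per month. Data is invented for illustration.
from collections import Counter

incidents = [
    {"month": "2024-01", "severity": 2, "root_cause": "phishing"},
    {"month": "2024-01", "severity": 5, "root_cause": "phishing"},
    {"month": "2024-02", "severity": 4, "root_cause": "unpatched software"},
    {"month": "2024-02", "severity": 5, "root_cause": "phishing"},
]

# Overall volume per month: the denominator for most other reporting.
volume = Counter(i["month"] for i in incidents)

# Repeat root causes: a signal that post-incident findings aren't landing.
root_causes = Counter(i["root_cause"] for i in incidents)

# Share of higher-severity incidents (severity 1-2) per month: should trend down.
high_severity_share = {
    month: sum(1 for i in incidents if i["month"] == month and i["severity"] <= 2) / total
    for month, total in volume.items()
}

print(volume, root_causes, high_severity_share, sep="\n")
```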

There are many more, and far better, examples. But in general, insights gleaned from incidents (trends in particular) are one of the most useful means of assessing operational maturity and making meaningful improvements to any system. As a cybersecurity leader, you can’t get too good at understanding and being comfortable talking about incidents and what they mean to your team and organization.

Roundup of security conference and CFP trackers

A collection of websites and projects that I’ve used in an attempt to track upcoming information security (infosec) or cybersecurity conferences, including call for papers (CFP) deadlines.

General websites or projects

  • https://infosec-conferences.com/
  • InfoconDB
  • InfoSecMap
  • Conference Index
  • Conferences (Cyber), a spreadsheet

CFP specific

  • Security and Privacy Conference Deadlines
  • CFP Time
  • WikiCFP - Best to use the search function, and the combined results of “cybersecurity” and “security” will probably capture most related events and subcategories.

Exposure management - Managing assets, attack surface, attack paths, and vulnerabilities with purpose

There are a variety of cybersecurity product categories and activities intended to reduce the likelihood that an adversary finds and successfully exploits a vulnerability, resulting in an intrusion (and ultimately a breach):

  • Asset management (often including “inventory”)
  • Attack surface management
  • Attack path management
  • Vulnerability management
  • And more . . .

In thinking about how all of these fit together, and in particular what a vertically integrated product or activity might look like, I find it helpful to identify the attribute, or unique set of attributes, that matters most. In the context of assets that comprise our attack surface, any of which may have a number of points of vulnerability and thus be subject to attack, the attribute that seems to matter most is exposure.

You can have assets rife with vulnerabilities, but through a variety of controls make those vulnerabilities inaccessible enough (or contain the system sufficiently) that you have mitigated or deprioritized the risk. At the same time, you can have an asset with a single esoteric vulnerability, and if that vulnerability is exposed you can end up in a very serious situation.

Exposure management is the act of looking at your attack surface in the proper context to identify the subset of assets that are exposed, meaning that a vulnerability is exploitable by an adversary, so the corresponding risk can be mitigated or remediated. For example (a rough sketch of how these inputs might combine follows the list):

  • Threat intelligence is used to identify the highest likelihood and highest impact threats, and to ensure that risks to assets are tracked, understood, and actioned if necessary. It’s worth noting that exploitability of a vulnerability is a critical aspect of threat intelligence for this particular purpose. For example: Knowing that ransomware is a threat is obviously helpful. Knowing that a ransomware actor gains access through a software vulnerability with a publicly or commercially available exploit meaningfully impacts prioritization.
  • Data from technical controls will help to make determinations about which assets and corresponding vulnerabilities are reachable in the first place and/or whether there are mitigations in place. There’s a huge opportunity to better correlate data from endpoint, cloud, and application security products to make these determinations.
  • Lastly, and arguably most important: use every piece of data and information at your disposal to ensure that you have the most complete picture of your attack surface. Unmanaged assets are the adversary’s plaything.
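Here is a rough sketch of how these inputs might combine. Everything in it is hypothetical: the exploit-availability flag would come from threat intelligence, reachability from endpoint, cloud, and application security telemetry, and the asset list (including whether an asset is managed at all) from whatever inventory sources you can gather.

```python
# Hypothetical join of threat intelligence (exploit_available), technical
# control data (reachable), and inventory (managed) into an exposure view.

findings = [
    {"asset": "vpn-gw-01", "cve": "CVE-2023-4966", "exploit_available": True, "reachable": True, "managed": True},
    {"asset": "build-srv-07", "cve": "CVE-2021-44228", "exploit_available": True, "reachable": False, "managed": True},
    {"asset": "unknown-host-3", "cve": None, "exploit_available": False, "reachable": True, "managed": False},
]

# Exposed: a reachable vulnerability with a usable exploit on a known asset.
exposed = [f for f in findings if f["managed"] and f["reachable"] and f["exploit_available"]]

# Unmanaged assets get surfaced separately; you can't reason about their
# vulnerabilities until they're inventoried.
unmanaged = [f for f in findings if not f["managed"]]

print("fix first:", [f["asset"] for f in exposed])
print("bring under management:", [f["asset"] for f in unmanaged])
```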

Naturally, productizing this concept in a vertically integrated manner (combining data, information, and intelligence spanning each of these areas) would move the needle toward something more actionable, leading to better outcomes, likely at a lower cost.

And in general, instead of trying to optimize for completeness or maturity in any or all of these areas on their own, consider how you might do just enough of each with a high degree of coordination, and in doing so actively and effectively reduce risk by managing your exposure.

Discussion on LinkedIn, Twitter