Breaking down exposure management

While participating in some industry analyst research, I was asked how I’d walk someone through and connect these concepts. This is a paraphrased version of the talk track (with some visuals based on prior work).

I’ve continued to think about the convergence of a number of foundational security activities into a more holistic exposure management practice (and ultimately, exposure management products). Those activities include asset, attack surface, attack path, and vulnerability management, among others. In this case, we’ll look at them hierarchically.

Situational awareness: Assets and attack surface


Asset management is foundational to information technology governance and cybersecurity hygiene. We often say that we can’t detect what we can’t see, but this overlooks the fact that we can’t do anything at all to protect an asset we don’t know about in the first place. Asset management covers the subjects of processes like procurement, ownership, maintenance, and periodic inventory or other forms of discovery. Of course, not all assets an organization manages are cybersecurity or information assets, but it’s better to be greedy when it comes to intake of asset data: deliberately exclude assets that aren’t in scope rather than assume they were never worth tracking.
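As a minimal sketch of that greedy-intake posture (the field names and sources here are hypothetical, not drawn from any particular product), ingest everything and make exclusion an explicit, reviewable decision:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    source: str             # where the record came from (CMDB, cloud API, scanner, ...)
    owner: str | None = None
    in_scope: bool = True   # default to in-scope; exclude deliberately, not by omission
    tags: set[str] = field(default_factory=set)

def ingest(records: list[dict], source: str) -> list[Asset]:
    """Accept every record from every source; scoping happens later, explicitly."""
    return [Asset(asset_id=r["id"], source=source, owner=r.get("owner")) for r in records]

def mark_out_of_scope(assets: list[Asset], excluded_ids: set[str]) -> None:
    """Exclusion is a recorded decision rather than a silent gap in the inventory."""
    for asset in assets:
        if asset.asset_id in excluded_ids:
            asset.in_scope = False
            asset.tags.add("excluded-by-policy")
```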

Every asset management activity is an opportunity to identify and manage your attack surface.


Attack surface management involves continually validating the presence, state, and key attributes of assets you know about, and actively seeking assets you don’t yet know about or understand (i.e., assets not captured as part of asset management processes). The goal of this activity is to enrich your asset data set with the information needed to know what needs protecting, and ultimately to continuously take the steps needed to protect at-risk assets.

Your security controls are part of your attack surface!

Attack surface management may be best thought of as an umbrella activity within which a number of lower-level, highly contextual activities exist.
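To make the core loop concrete, here is a minimal sketch (the data shapes are assumptions, not any vendor’s API) of reconciling a managed inventory against continuous discovery results:

```python
def reconcile(inventory: dict[str, dict], discovered: dict[str, dict]) -> dict[str, list[str]]:
    """Compare the managed inventory against continuous discovery results.

    Both arguments map an asset key (hostname, IP, ARN, ...) to observed attributes.
    """
    known = inventory.keys() & discovered.keys()
    return {
        # Discovered but unmanaged: shadow IT, forgotten systems, partner assets.
        "unknown": sorted(discovered.keys() - inventory.keys()),
        # Managed but not observed: stale records, or assets that went dark.
        "missing": sorted(inventory.keys() - discovered.keys()),
        # Known assets whose observed state no longer matches the record.
        "drifted": sorted(k for k in known if inventory[k] != discovered[k]),
    }
```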

Context and depth: Attack paths, vulnerabilities, intelligence, and more


Attack path management starts with how an adversary might gain initial access to any part of your attack surface, and progresses into understanding the various ways they can advance by way of identities, applications, systems, networks, and more. This can range from very simple to very in-depth:

  • Elementary: Understanding asset accessibility (see the sketch after this list). For instance, are assets accessible only via internal networks, or are they connected directly to the Internet? Are web-based portals also protected by key material, or can anyone access the login screen?
  • Intermediate: Understanding asset components and behavior. What does the system look like when observed under normal operating conditions? What user, application, or underlying platform behaviors might be present if it is under attack?
  • Advanced: Understanding the asset in the context of a holistic threat model, including whether the system is subject to targeting by adversaries with exceptional capabilities or motivations.
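At the elementary end, even a first cut as simple as the following sketch helps separate Internet-facing assets from internal-only ones (a real check would also consult DNS, routing, firewall rules, and cloud metadata):

```python
import ipaddress

def exposure_class(ip: str) -> str:
    """Rough first cut at asset accessibility based on the IP address alone."""
    addr = ipaddress.ip_address(ip)
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        return "internal-only (reachable via internal networks)"
    return "potentially Internet-facing (verify routing and firewall rules)"

for ip in ["10.1.2.3", "192.168.0.10", "8.8.8.8"]:
    print(ip, "->", exposure_class(ip))
```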

Vulnerability management is best thought of as a peer to attack path management, as most initial access stems from some identified and manageable class of vulnerability: software, configuration, process, or human/user.

Good attack surface management solutions also incorporate ongoing activities like network scanning and cloud posture management (technically a form of configuration vulnerability management). They also include oversight activities like penetration testing and red teaming, aimed at validating assumptions about your attack surface and your ability to defend it against adversaries.

Ultimately, the synthesis of all of this data, integrated with high-quality threat intelligence to help you prioritize what’s likely and understand what’s possible, should result in understanding which of your assets are most exposed so that you can take action to mitigate risk.
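A minimal sketch of that synthesis, with entirely hypothetical data: join per-asset findings with an exploited-in-the-wild feed and weight by exposure, so the riskiest assets rise to the top:

```python
# Hypothetical inputs: asset attributes, per-asset vulnerability findings,
# and an exploited-in-the-wild set derived from threat intelligence.
assets = {"web-01": {"internet_facing": True}, "db-01": {"internet_facing": False}}
findings = {"web-01": ["CVE-2023-0001", "CVE-2023-0002"], "db-01": ["CVE-2023-0002"]}
known_exploited = {"CVE-2023-0002"}

def exposure_score(asset: str) -> int:
    """Crude weighting: exploited-in-the-wild counts triple; Internet exposure doubles."""
    score = sum(3 if cve in known_exploited else 1 for cve in findings.get(asset, []))
    return score * 2 if assets[asset]["internet_facing"] else score

for name in sorted(assets, key=exposure_score, reverse=True):
    print(f"{name}: score={exposure_score(name)}")
```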


Dave Aitel (and friends) on BlackHat, DEF CON, and infosec culture

The Vegas security conferences used to feel like diving into a river. While yes, you networked and made deals and talked about exploits, you also felt for currents and tried to get a prediction of what the future held. A lot of this was what the talks were about. But you went to booths to see what was selling, or what people thought was selling, at least.

But it doesn’t matter anymore what the talks are about. The talks are about everything. There’s a million of them and they cover every possible topic under the sun. And the big corpo booths are all the same. People want to sell you XDR, and what that means for them is a per-seat or per-IP charge. When there’s no differentiation in billing, there’s no differentiation in product.

I sat out BlackHat and DEF CON this year, for the first year in many, and I’ve spent much of the past week catching up with folks who did go and getting their perspective. The recurring theme has been “it’s different”, but I think I would have told someone the same last year, or the year before that.

My day-to-day interests aren’t what they were nearly 20 years ago when I started working in the information security industry. I’m much more interested in product and operations, in particular how we think about outcomes, value, and ultimately driving defenders’ costs down while driving adversary costs (way) up.

That said, I still get enjoyment from novel research and deep technical topics despite feeling like I have to work much harder to understand them. And more so than the content, I have a deep appreciation for the relatively small number of folks who led our industry for decades, most or all of them with deeply technical backgrounds and expertise, having built tools or technologies that are still considered foundational to this day. Industry conferences are a bellwether for the broader industry, the skills and talent that we develop, and the products or solutions we build. And so I am always very interested in how those who have seen infosec grow from a hacker-centric counterculture into a thriving industry perceive conferences in particular.

Dave’s post to the venerable Dailydave mailing list struck a chord with me, as it clearly did with others, several of whom are on my short list of industry giants. The discussion is well worth a read.

If you’ve been in this business for a while, you have a dreadful fear of being in your own bubble. To not swim forward is to suffocate.

Google DFIQ: Open source building blocks for IR playbooks

The DFIQ project (GitHub, website) is an open source collection of questions that analysts should ask during certain types of investigations. There’s a simple tagging system that allows a unique question to be associated with platforms, primitives like file or network knowledge, and of course MITRE ATT&CK techniques. Questions are used in the context of scenarios, which are effectively types of incidents.

Example: Cloud Project Compromise Assessment


I’m not sure I can overstate the importance or utility of this project. DFIQ scenarios, facets, and questions are key ingredients used in incident response playbooks, and to have them organized and publicly available is an asset to the DFIR and cybersecurity communities.
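As a rough illustration of the tagging idea (DFIQ itself defines its data in YAML in the project repo; the field names below are a sketch, not the project’s actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """Illustrative stand-in for a DFIQ question; not the project's real schema."""
    text: str
    platforms: set[str] = field(default_factory=set)
    attack_techniques: set[str] = field(default_factory=set)  # MITRE ATT&CK IDs

questions = [
    Question("Were any new service accounts created?", {"cloud"}, {"T1136"}),
    Question("Were any tools transferred in from an external source?", {"windows"}, {"T1105"}),
]

# Tag-based filtering lets a scenario pull in only the questions that apply.
cloud_questions = [q.text for q in questions if "cloud" in q.platforms]
print(cloud_questions)
```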

Exposure management via CISA’s Top Routinely Exploited Vulnerabilities

This is a textbook example of the type of input you’d apply to an exposure management process (a minimal sketch follows the list below):

  1. Take the CISA list, along with others, and overlay these vulnerabilities atop your attack surface. The resulting list is your most at-risk assets.
  2. Remove any assets where you have a strong mitigating control in place.
  3. Patch or otherwise mitigate these vulnerabilities on these assets (really, patch them all, and then consider further mitigations, should the class of attack reappear in the form of another vulnerability down the line).
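Here is a minimal sketch of steps 1 and 2 (the data shapes and CVE-to-asset mappings are hypothetical): intersect the routinely exploited list with per-asset findings, then set aside assets with a strong mitigating control:

```python
# Hypothetical data shapes: the advisory's CVE list, per-asset findings from
# your scanners, and assets with a strong compensating control in place.
routinely_exploited = {"CVE-2021-44228", "CVE-2023-23397"}   # e.g., drawn from the advisory
asset_vulns = {
    "vpn-gw-01": {"CVE-2021-44228"},
    "mail-01": {"CVE-2023-23397"},
    "intranet-01": {"CVE-2020-0000"},  # placeholder CVE, not on the list
}
mitigated = {"mail-01"}

at_risk = {
    asset: vulns & routinely_exploited
    for asset, vulns in asset_vulns.items()
    if vulns & routinely_exploited and asset not in mitigated
}
print(at_risk)  # -> {'vpn-gw-01': {'CVE-2021-44228'}}
```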

Note that CISA has produced these lists for three years, but there are related lists (some being the product of CISA’s work combined with other partners). You can find them here: https://www.cisa.gov/search?q=Top%20Routinely%20Exploited%20Vulnerabilities

Incidents as a measure of cybersecurity progress

Phil Venables published a helpful collection of ways that risk and cybersecurity leaders can share their successes, ideally on an ongoing basis. His working theory, which I believe is correct, is that we’re not great at this. And as a result, many of our peers only hear from us when things go sideways, which leads to a variety of problems.

His first suggestion is aptly focused on incidents:

The classic case is incidents. Your main moment in the sun might be in the middle of an incident. If successfully detected and recovered from then you will likely get some kudos. But, too many of these and leadership might conclude you’re not doing an effective job. However, you can provide a better perspective if you place these incidents in some context, such as the percentage of incidents experienced vs. incidents that were avoided because threats were thwarted or risks were mitigated. Of course, this can require some degree of subjectivity depending on how you measure it. You could use a regularly communicated set of messages such as: this month our controls stopped 100,000+ phishing emails, repelled 200,000+ perimeter intrusion attempts, deflected 150,000+ malware infection attempts, and so on vs. only reporting the incidents. In the event of a truly sophisticated intrusion or other circumstance then those incidents might be better seen as what they are, that is an unusual event, not an event that is thought to happen upon every intrusion attempt.

Understanding how to think and talk about incidents is critically important. Among other things, incidents are a foundational measure of control effectiveness, continuous improvement, and overall operational workload in security operations.

I’ve encountered this challenge a number of times over the years in organizations big and small. A simplified version of my approach to incidents:

  1. Define “incident”: It should be clear to everyone when an incident occurs. This is important so that you can respond, obviously, but also so that you know when to capture relevant data and information.
  2. Define incident severity levels: A typical model ranges from level 5 (least severe, think “a control prevented delivery of a phishing email”) to level 1 (most severe, think “we have to notify our customers and board of directors”).
  3. Have a simple, repeatable, measured incident management process: Determine where you’ll capture data and information related to incidents, and define your workflow through initial documentation, response, and post-incident analysis. (A minimal sketch of a record supporting this follows below.)
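Assuming the 5-to-1 severity model above, here’s a minimal sketch of such an incident record; the fields are illustrative, not prescriptive:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum

class Severity(IntEnum):
    """5 is least severe, 1 is most severe, per the model above."""
    SEV5 = 5   # e.g., a control prevented delivery of a phishing email
    SEV4 = 4
    SEV3 = 3
    SEV2 = 2
    SEV1 = 1   # e.g., customer and board notification required

@dataclass
class Incident:
    title: str
    severity: Severity
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    root_cause: str | None = None      # captured during post-incident analysis
    closed_at: datetime | None = None

incident = Incident("Phishing email reported by a user", Severity.SEV5)
```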

If you do these three things, you’re positioned to respond, measure, improve, and communicate incident-related information. But to gain (and ultimately share!) useful insights, you have to ask the right questions and look at the data in useful ways. A few aspects of incidents that I’ve found useful for gaining insights and reporting include:

  • Overall number of incidents: This is a performance indicator, not a success measure. Think of it as the building block or denominator for most other incident-related reporting. That said, there’s plenty to be learned from the numbers alone. For example, it’s okay to have a lot of incidents, particularly if they’re lower severity and you’re learning from them and making improvements. Conversely, having very few incidents might be cause for concern, as it might be a sign that incidents aren’t being detected or properly documented.
  • Incidents by root cause: One of the most useful data points to capture during post-incident analysis is root cause. Repeat root causes aren’t ideal and are indicators that you want to take some preventive action. Repeat root causes are also a signal that you aren’t performing and/or following through with post-incident analysis and related findings.
  • Incidents by severity: Again, it’s okay to experience incidents. Specific to severity, you want fewer higher severity incidents over time. For example, if you had 100 incidents in January and 25% were higher severity, having 100 incidents in February where only 10% are higher severity is a sign of progress (see the sketch after this list).
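A minimal sketch of deriving these views from captured incident records (the data is hypothetical, and severity uses the inverted 5-to-1 scale above, so lower numbers are more severe):

```python
from collections import Counter

# Hypothetical records: (month, severity_level, root_cause).
incidents = [
    ("2024-01", 5, "phishing"), ("2024-01", 2, "unpatched-vpn"),
    ("2024-02", 5, "phishing"), ("2024-02", 4, "misconfiguration"),
    ("2024-02", 5, "phishing"),
]

# Repeat root causes are the ones demanding preventive action.
by_root_cause = Counter(cause for _, _, cause in incidents)
print(by_root_cause.most_common())

# Share of higher-severity incidents (levels 1-2); you want this trending down.
months = sorted({month for month, _, _ in incidents})
for month in months:
    in_month = [sev for m, sev, _ in incidents if m == month]
    print(month, sum(1 for sev in in_month if sev <= 2) / len(in_month))
```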

There are many more, and far better, examples. But in general, insights gleaned from incidents (trends in particular) are one of the most useful means of assessing operational maturity and making meaningful improvements to any system. As a cybersecurity leader, you can’t get too good at understanding and being comfortable talking about incidents and what they mean to your team and organization.