Accuracy, Precision, And Threat/Vulnerability Pairing

  • Hola from the Lone Star State, and Genuine Joe’s coffee shop here in Austin.

I’ve been thinking a little bit about “threat/vulnerability” pairing.  You know the drill: go out, get a scan, match the scan data to existing exploits, and voilà!  You’ve got risk.

    Now regular readers and FAIR practitioners know that I don’t believe this exercise gives you risk at all.  In fact, in FAIR terms, I’m not sure this exercise does much for finding Vulnerability.

My Assertion To You:  The industry loves T/V pairing because it is precise.  It looks good on paper, and if you’re a consultant doing it, it looks like you’ve earned your hourly rate.  The precision of T/V pairing gives us a false sense of accuracy.


Note the target image over on your right.  In it, we have two groups of shots.  One is very precise – the five shots are grouped tightly together.  The other doesn’t have as tight a grouping, but it is very accurate – all shots are in the bullseye.  Now obviously, we’d like to be both precise and accurate, but if we can’t be both – then which would you choose?


When I say that we love the precision of T/V pairing, it’s because it gives us a degree of certainty in an uncertain exercise (risk assessment).  However, if this certainty is inaccurate to the reality of “risk” (or, more precisely <sorry>, what you and I know as Vulnerability in FAIR), then what’s the point?  And my experience is that, save the use of one of those cool, automated Pentest tools, the manual process of T/V pairing isn’t a ton of fun.

So why isn’t T/V pairing accurate?  It has some problems, notably problems from the “Threat Community” perspective.

No exploit exists in a vacuum.  In FAIR, Threat Event Frequency (TEF) is dependent on two factors, Contact and Action.   If TEF = 0, then we have no risk.  Motivation is part of what determines Contact and/or Action on the part of a Threat Community (Jack will not like the fact that I’m not more specific here, but I’m keeping it at a high level for the sake of brevity).  So the use of an exploit depends, in part, on motivation.

It also depends on the Capability of the Threat (in FAIR, “TCap”).  TCap consists of a Threat Community’s skills and resources, and different Threat Communities have different skills and resources.  An External Technical Professional has a different set of tools and resources available than an Internal Non-Technical Privileged attacker.  Depending on who the most probable Threat Community is, you may not care about zero-day exploits, social engineering skills, or whether they have the necessary equipment to disrupt communications with some electromagnetic pulse weapon.  More succinctly stated:

    • Threat Agents may be too stupid to use the exploit
    • Threat Agents may be too smart to use the exploit (they have their own, better tools)
    • Threat Agents may be too privileged to use the exploit

***T/V pairing cannot account for this uncertainty.***  In fact, T/V pairing can screw up our perspective on probability, because in it we see “proof” of how vulnerable we think we are.


I’ve mentioned before that FAIR uses the comparison of population distributions (TCap and our Control Strength) to determine how Vulnerable we are to a Threat Community.  By using population distributions and stochastic analysis, you can account for uncertainties.  For example:

I’m concerned about the risk surrounding my XYZ application.   Now I can get metrics on my Control Strength – I can scan XYZ, hack at XYZ, have good patch/update/SDLC processes wrapped around XYZ, hire the brightest admins around XYZ, get HIDS, etc…   I can also cruise known exploits, read my logs, and develop other metrics that help in FAIR analysis.  However, because I will almost always be uncertain who could possibly hack at XYZ, I have to make an estimate of the skills and resources someone might throw against me.  If my Control Strength is very high, I know that I’m going to basically be concerned with only the most technically proficient threat community.  Set your TCap at Very High as well, plug the rest of the numbers into FAIR, and go for it!  The resultant LEF numbers will represent the “unknown unknowns.”
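The stochastic comparison described above can be sketched in a few lines of code.  This is strictly my own toy illustration, not FAIR’s official math: it assumes TCap and Control Strength can each be expressed as a distribution over an arbitrary 0–100 capability scale (the specific normal distributions and their parameters are made up for the example), and estimates Vulnerability as the probability that a sampled TCap exceeds a sampled Control Strength.

```python
import random

random.seed(42)

def vulnerability(tcap_dist, cs_dist, trials=100_000):
    """Estimate Vulnerability as P(a TCap sample exceeds a Control Strength sample)."""
    hits = sum(1 for _ in range(trials) if tcap_dist() > cs_dist())
    return hits / trials

# Hypothetical calibration on a 0-100 capability scale:
tcap = lambda: random.gauss(85, 7)   # "Very High" TCap: skilled, well-resourced community
cs   = lambda: random.gauss(90, 5)   # "Very High" Control Strength around XYZ

print(f"Estimated Vulnerability: {vulnerability(tcap, cs):.1%}")
```

The point of the sketch is the shape of the analysis, not the numbers: even with strong controls, overlapping distributions leave a non-trivial probability that the threat community overcomes them, which is exactly the uncertainty a single T/V pairing cannot express.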

    For me, the probability that a threat community can overcome my controls is a better picture of true “Vulnerability” (as described in FAIR) than focusing on the possibility that a particular exploit may be matched against a specific weakness in a system.  Not only is it more accurate, but it’s a heck of a lot less painful, as well.

NOTE:  I’m not saying there’s no worth in T/V pairing exercises, knowing what attacks are out there, etc…   As I said above, automated Pentest tools that do these things for you are très cool.  It’s my opinion that even though they call themselves risk management tools (and they really aren’t), they are very, very useful in developing metrics for use in FAIR analysis.  If you can afford one, buy it!



    1. roodee Jul 23

      In your methodology (and post here) you mention TEF (Threat Event Frequency). I am curious how you define threat. In some places it seems to be defined as a pseudo-event and in others as an object or causal agent. Is it both? If it is the former then TEF sounds superfluous, if it is the latter then what describes the properties of the threat event? Perhaps a more precise question will help. In your opinion, is an asset “threatened” by a person/object or by the possibility of an event? If this were a drawing board I would attempt to graphically communicate the relationship between these concepts and perform a “coherence test” on it, but unfortunately all I have are words at the moment.

    2. Alex Jul 23

      Hi Roodee! Thank you for the thoughtful comments. You know, you’re establishing a track record with me as someone who thinks these things through. I really welcome that.

      Some background on Threats here:

      Threat Event Frequency:

      I believe that a reasonable definition for Threat is:

      Anything (e.g., object, substance, human, etc.) that is capable of acting against an asset in a manner that can result in harm.

      A Threat Event occurs (an asset is “threatened”) when there is contact and action from a threat. Measuring frequency is critical because, as I think you’re suggesting, we’re not so much concerned with possibility as we are with probability (risk being a probability expression, and frequency being a necessary part of determining probability).

      In terms of visualization, I encourage you to follow the TEF link for a breakdown.

Now I don’t know exactly what you mean by “pseudo-event” – if you mean something like a “false positive” I agree completely. I think of a Threat Event as ranging from something as benign as rattling the doorknob to something as malicious as the precursor (or initial stage) of a Loss Event.

    3. roodee Jul 23

Sorry, it may sound like I am nitpicking or being overly Platonic, but I think clarity on this issue is important. You mention “A Threat Event occurs…”, but how can an “object, substance, human, etc” (based on the inclusion of Threat in Threat Event) *occur*? These objects, according to your definition, don’t occur, they exist, don’t they? In one sense you are describing a threat as an object capable of some sort of actual (causal agent here) action. Yet, in another sense you are describing a threat as some sort of occurrence (Threat Event). If we provisionally agree that the former is the most accurate description of a threat then we have to ask what label/term represents and describes the nature of the harm done to an asset. It sounds exceedingly strange to define a Threat Event as an ‘”object, human, etc” that occurs when there is contact and action from an “object, human, etc”‘. Clearly, this doesn’t make the sense we think it should. It brushes very close to tautological. In addition, with this sort of usage, what is the referent of the Threat Agent label? It almost seems redundant, and yet we know it is speaking of this “object, human, etc”. Perhaps this is needless philosophical conversation, but in my experience when these terms are not properly defined the system in which they operate suffers from varying degrees of implementation and usage challenges.

    4. Alex Jul 23


      Not at all! If I’m not clear, or not helpful, then what use is the blog?

      Let me try to explain myself this way:

      A Threat Event happens when a threat has contact with us and then acts against us. Threat Event is the “verb” that the Threat “noun” performs.

      Thus we don’t define Threat *Event* as an object or human, blah, blah blah, a Threat Event occurs when there is contact and action against us by the Threat.

TEF is very important for us to measure (the most undervalued metric possible, if you ask me).

Think of it this way (and that diagram above should help). The frequency with which a Threat Event occurs is dictated by frequency of contact and probability of action. Probability of action is determined by the Threat’s perceived level of effort to act, perceived risk, and perceived value of the target asset. Note that non-human threats don’t worry about these factors.

An example: I am a robber (Threat). I approach a car and see that there’s a laptop in it (Contact). I quickly make a determination about Action – do I have the skills to overcome the lock controls, is the value of the laptop enough for me to bother, and what is the probability and impact (risk) to me that I’ll get caught? I touch the door – Threat Event. Now, until I make off with the laptop, the owner of the car does not have a Loss Event (two different things).
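The relationship sketched above (TEF driven by contact frequency and probability of action, with probability of action shaped by perceived value, effort, and risk) can be put into a toy model. To be clear, this is my own crude illustration, not FAIR math: the linear scoring function and its weights are arbitrary, and the inputs are all on an invented 0–1 scale.

```python
def p_action(perceived_value, perceived_effort, perceived_risk):
    """Crude stand-in: action gets likelier as perceived value rises
    and as perceived effort and risk fall. All inputs on a 0-1 scale;
    the weights are arbitrary illustration, not FAIR calibration."""
    score = perceived_value - 0.5 * perceived_effort - 0.5 * perceived_risk
    return min(1.0, max(0.0, score))

def threat_event_frequency(contact_freq, prob_action):
    """Threat Events per year = contacts per year x probability a contact becomes action."""
    return contact_freq * prob_action

# The car-robber example: frequent contact with parked cars, but the lock
# (effort) and the chance of being caught (risk) keep action probability modest.
pa = p_action(perceived_value=0.6, perceived_effort=0.4, perceived_risk=0.5)
tef = threat_event_frequency(contact_freq=50, prob_action=pa)
print(pa, tef)  # 0.15 and 7.5 threat events/year
```

Note that a Loss Event is still a separate, downstream question: this toy only counts the doorknob-rattling, not whether the robber gets away with the laptop.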

      Any clearer? For proper definitions and a look at the taxonomy of risk you might want to download the pdf here:

    5. Gustavo Bittencourt Jul 25

      Hi Alex

      Could you explain the difference between “threat” and “threat agent” in FAIR? I couldn’t see the distinction.

    6. Alex Jul 25

      Hey Gustavo!

      “Could you explain the difference between “threat” and “threat agent” in FAIR? I couldn’t see the distinction.”

      That’s because there isn’t one!

Seriously, the difference is slight and subtle. The term “Threat Agent” refers to individuals within a threat population – read: threat category. “Threat” as used by FAIR folks (and myself above) tends to refer to the threat category we’re using in a specific analysis (threat being “anything capable of acting against an asset”). We operate at this high level of “category” rather than mentioning a specific agent because we use population distributions to describe where the Threat Agent resides in determining TCap and TEF (our Threat “metrics”).

      In case it helps, we tend to break out the Threat Categories into these basic high level descriptions:

      * External Amateur
      * External Professional Non-Technical
      * External Professional Technical
      * Internal Privileged Technical
      * Internal Privileged Non-Technical
      * Internal Non-Privileged Technical
* Internal Non-Privileged Non-Technical
      * Malware
      * Force Majeure

1. What Are You Managing Towards? (And On Disproving “Risk Management”)
