Hola from the Lone Star State, and Genuine Joe’s coffee shop here in Austin.
I’ve been thinking a little bit about “threat/vulnerability” pairing. You know the drill: run a scan, match the scan data to existing exploits, and voilà! You’ve got risk.
Now regular readers and FAIR practitioners know that I don’t believe this exercise gives you risk at all. In fact, in FAIR terms, I’m not sure this exercise does much for finding Vulnerability.
My Assertion To You: The industry loves T/V pairing because it is precise. It looks good on paper, and if you’re a consultant doing it, it looks like you’ve earned your hourly rate. But the precision of T/V pairing gives us a false sense of accuracy.
PRECISION AND ACCURACY
Note the target image over on your right. In it, we have two groups of shots. One is very precise: the five shots are tightly grouped. The other doesn’t have as tight a grouping, but it is very accurate – all the shots are in the bullseye. Obviously, we’d like to be both precise and accurate, but if we can’t be both, which would you choose?
T/V PAIRING IS PRECISE, BUT NOT ACCURATE
When I say that we love the precision of T/V pairing, it’s because it gives us a degree of certainty in an uncertain exercise (risk assessment). However, if this certainty doesn’t reflect the reality of “risk” (or, more precisely – sorry – what you and I know as Vulnerability in FAIR), then what’s the point? And in my experience, save the use of one of those cool, automated Pentest tools, the manual process of T/V pairing isn’t a ton of fun.
So why isn’t T/V pairing accurate? It has some problems, notably from the “Threat Community” perspective.
No exploit exists in a vacuum. In FAIR, Threat Event Frequency (TEF) depends on two factors: Contact and Action. If TEF = 0, then we have no risk. Motivation is part of what determines Contact and/or Action on the part of a Threat Community (Jack will not like that I’m not more specific here, but I’m keeping it at a high level for the sake of brevity). So the use of an exploit depends, in part, on motivation.
It also depends on the capability of the threat (in FAIR, “TCap”). TCap consists of the Threat Community’s skills and resources, and different Threat Communities have different skills and resources. An External Technical Professional has a different set of tools and resources available than an Internal Non-Technical Privileged attacker does. Depending on who the most probable Threat Community is, you may not care about zero-day exploits, social engineering skills, or whether they have the necessary equipment to disrupt communications with some electromagnetic pulse weapon. More succinctly stated:
- Threat Agents may be too stupid to use the exploit
- Threat Agents may be too smart to use the exploit (they have their own, better tools)
- Threat Agents may be too privileged to use the exploit
***T/V pairing cannot account for this uncertainty.*** In fact, T/V pairing can screw up our perspective on probability because in it we see “proof” of how vulnerable we think we are.
HOW CAN WE GET MORE ACCURACY? BY GIVING UP SOME PRECISION!
I’ve mentioned before that FAIR uses the comparison of population distributions (TCap and our Control Strength) to determine how Vulnerable we are to a Threat Community. By using a population distribution and stochastic analysis, you can account for uncertainties. For example:
I’m concerned about the risk surrounding my XYZ application. I can get metrics on my Control Strength: I can scan XYZ, hack at XYZ, wrap good patch/update/SDLC processes around XYZ, hire the brightest admins for XYZ, deploy HIDS, etc. I can also cruise known exploits, read my logs, and develop other metrics that help in a FAIR analysis. However, because I will almost always be uncertain about who could possibly hack at XYZ, I have to estimate the skills and resources someone might throw against me. If my Control Strength is very high, I know I’m basically going to be concerned with only the most technically proficient threat community. Set your TCap at Very High as well, plug the rest of the numbers into FAIR, and go for it! The resultant LEF numbers will represent the “unknown unknowns.”
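The distribution comparison described above can be sketched as a quick Monte Carlo simulation. This is a minimal illustration, not the official FAIR calculation: the `pert_sample` helper, the 0–100 capability scale, and the example numbers are all assumptions chosen for illustration, and a simple triangular distribution stands in for the PERT-style distributions often used in practice.

```python
import random

def pert_sample(low, mode, high):
    # Stand-in for a PERT distribution: a triangular draw with the
    # same low / most-likely / high parameters (an assumption for brevity).
    return random.triangular(low, high, mode)

def estimate_vulnerability(tcap_params, cs_params, trials=100_000):
    """Vulnerability = probability that a sampled TCap exceeds a
    sampled Control Strength, estimated by Monte Carlo."""
    wins = 0
    for _ in range(trials):
        tcap = pert_sample(*tcap_params)
        cs = pert_sample(*cs_params)
        if tcap > cs:
            wins += 1
    return wins / trials

# Hypothetical numbers: a "Very High" TCap against strong controls,
# both expressed on an assumed 0-100 percentile scale.
vuln = estimate_vulnerability((85, 95, 100), (80, 92, 98))
print(f"Estimated Vulnerability: {vuln:.1%}")
```

The point of the exercise is that the answer comes out as a probability with the uncertainty baked in, rather than a binary “exploitable / not exploitable” verdict from a scan match.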
For me, the probability that a threat community can overcome my controls is a better picture of true “Vulnerability” (as described in FAIR) than focusing on the possibility that a particular exploit may be matched against a specific weakness in a system. Not only is it more accurate, but it’s a heck of a lot less painful, as well.
NOTE: I’m not saying there’s no worth in T/V pairing exercises, knowing what attacks are out there, etc. As I said above, automated Pentest tools that do these things for you are très cool. It’s my opinion that even though they call themselves risk management tools (and they really aren’t), they are very, very useful in developing metrics for use in FAIR analysis. If you can afford one, buy it!