I’m hoping to cover a couple of things with you today.
First item: Threats, Vulnerabilities, and Controls: Units, Interaction, and Measurement
Jim @ DCS Security is becoming a blogging madman. I really enjoy his blog — it’s among the many that I hope to add to the links section on your right as soon as possible. Unlike my posts, his are short, sweet, and thought-provoking.
Today he asks,
“I would be interested in seeing the security blogsphere’s take on the relationship between Threats, Vulnerabilities and Controls.”
Ask and ye shall receive.
Up till now, we, as a profession, have been trying to work with or around what the folks at the Episteme blog have been talking about today:
I’ve never been a fan of the “reasonable man” test — it sounds too much like the way that many people in information security assess risk. I call it the “Potter Stewart Pornographic Risk Assessment Method”: “I don’t know how to define it, but I know it when I see it.” (This is the approach that advocates of Donn Parker’s “Due Care” school of information security practice suggest.)
We know that there is this thing called “risk.” Whether it’s on Jim’s site or on the SecurityMetrics mailing list, folks are starting to figure out that risk is made up of something, and if we can find out that something and break it down, we can come up with a way to really express and measure it.
Fortunately, Jack Jones started thinking about all this four years ago, and FAIR is the result. FAIR confirms that Jim @ DCS and those good folks on the metrics mailing list are correct — part of risk involves those elements we obsess over.
A visual representation of the relationships in FAIR:
Threats contribute in two direct ways. What does any given threat do?
- It tries to hurt us (note I said “try” — more on that later). In FAIR we call the rate of those attempts Threat Event Frequency (TEF). Think of a threat event as an instant in time: a virus is sent via email, a port scan is launched against our IP space, or a social-engineering phone call is made.
- When it tries to hurt us, it uses its capabilities (resources, skills) to attempt to cause loss. In FAIR terms, we call that Threat Capability, or TCap.
So our “units” of measurement for any threat we need to be concerned with are the number of times it’s going to try to hurt us (there are factors to measure that, as well) and how well it’s going to do its job. If you want to come up with “units” for how capable a threat is of doing its job, think of it as a population distribution. Among external hackers, for example, script kiddies might constitute the bottom 3/4, Metasploit mavens the next 3/16, and the final 1/16 are the “uberhackers gone bad” that we all assume are ready to exploit all the noise that our scanners mark in red.
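To make that population distribution concrete, here’s a minimal sketch in Python (not part of FAIR itself). The bucket labels and the assumption that capability percentiles are uniformly distributed across the attacker population are purely illustrative:

```python
import random

# Illustrative TCap buckets using the proportions above:
# bottom 3/4 script kiddies, next 3/16 Metasploit users, top 1/16 "uberhackers".
TCAP_BUCKETS = [
    ("script kiddie", 0.75),      # bottom 3/4 of the capability population
    ("metasploit maven", 0.1875), # next 3/16
    ("uberhacker", 0.0625),       # top 1/16
]

def tcap_percentile_to_bucket(pct):
    """Map a capability percentile (0.0-1.0) to its population bucket."""
    cumulative = 0.0
    for name, share in TCAP_BUCKETS:
        cumulative += share
        if pct <= cumulative:
            return name
    return TCAP_BUCKETS[-1][0]

def sample_tcap():
    """Draw one attacker's capability percentile, assuming a uniform population."""
    return random.random()

print(tcap_percentile_to_bucket(0.50))  # script kiddie
print(tcap_percentile_to_bucket(0.80))  # metasploit maven
print(tcap_percentile_to_bucket(0.97))  # uberhacker
```

The point isn’t the specific cutoffs — it’s that “how capable is my threat?” becomes a question of where that threat sits in a measurable population, rather than a gut feeling.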
Think about it for a second and tell me if this question seems goofy to you:
How vulnerable are we to the vulnerability?
It’s an odd question. Vulnerability, as our brains interpret it, suggests that by default we are in a state in which we are likely to be hurt. However, as we in the InfoSec profession use the term “vulnerability,” the question is valid. Why? Because we’re not always “vulnerable” to a “weakness in a system” — the latter being the common definition of what we mean when we say “vulnerability.”
Still with me? Think of it this way: I’ve got an unpatched Windows 98 (or Red Hat Linux 6, or whatever) box under my desk. It does one thing: it stores BBQ recipes in a database so I can get my pulled pork on. It has no network connectivity, not even an Ethernet card. How “vulnerable” am I to everything a scanner might find? Using the “weakness in a system” definition, I’m terribly vulnerable. However, I don’t feel very vulnerable. Why? Because the biggest threat I can think of is my own coffee cup.
To feel vulnerable I must take into account the capability of potential threats and my ability to resist those actions. Vulnerability must be a function of how capable my threat source is when compared to my ability to resist it (i.e. my controls).
Note that the only place a “weakness in a system” really fits into this equation is when we consider “controls.” If there is a significant weakness then our Control Strength (see below) suffers greatly in comparison to an applicable threat community.
So what are our “units” for vulnerability? It’s a ratio between the strength of our controls and the ability of a threat to overcome them. Though an uberhacker may be able to exploit any number of weaknesses on my unpatched system, I’m not really vulnerable to him: I lack connectivity (a control in itself), I am physically distant from him, and the value of my data is nil. Now my coffee cup, on the other hand….
Note that an attacker has to “try” — and we have to be “vulnerable” to them in order to have a loss event.
The FAIR whitepaper dedicates an entire section to controls. I won’t duplicate what it says here, but I will mention this: Controls are either preventative, detective, or aid in our ability to respond to an incident. Now, this weekend there was quite a bit written about IDS/IPS and the value of that technology or process. I don’t have much to say about that particular technology except the following:
Think about a threat source that IDS/IPS is supposed to help us defend against. When you think about their capabilities and the resources at their disposal, think about them in terms of the above TCap population distribution of bad guys — are they in the top 1/4, middle 1/2, or bottom 1/4 of the distribution when it comes to their abilities?
Now think about an asset that we’re expecting IDS/IPS to protect.
Now, consider what effect IDS or IPS will have (in terms of prevention, detection, and response) against that particular threat source and their toolset/capabilities — in other words, how it will contribute to an overall Control Strength distribution (see the “units” paragraph below). That’s the impact (and value) that IDS/IPS has. My guess is that IDS/IPS typically helps prevent or detect threats in the bottom 3/4 of the capability population, but only aids our response efforts (network forensics) against the top 1/4.
The “units” we’re looking for represent a gauge of the strength of my aggregate controls. Again, think of a population distribution. If we hooked that recipe server up to the Internet raw and in the wild, we’d be at the very bottom of control strength. If, on the other hand, our controls are on the magnitude of a large-budget financial institution’s, we might be in the top 1–5% of the population.
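Putting the pieces together: a loss event requires that a threat both acts (TEF) and overcomes our controls (vulnerability), so the pieces multiply. A minimal sketch, with entirely made-up numbers for illustration:

```python
def loss_event_frequency(tef_per_year, vulnerability):
    """Loss events occur only when a threat acts (TEF) AND overcomes our
    controls (vulnerability): LEF = TEF x Vulnerability."""
    return tef_per_year * vulnerability

# Hypothetical inputs: 200 attack attempts per year against a box whose
# aggregate control strength stops 98% of the threat community.
lef = loss_event_frequency(200, 1 - 0.98)
print(lef)  # 4.0 expected loss events per year
```

Either lever works: cut the attempts (reduce TEF) or raise control strength relative to the threat community (reduce vulnerability), and the expected frequency of loss events drops.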
I hope this helps. Note that we’ve covered only one half of the “risk equation.” The factors discussed today contribute only to the probability of loss, not the magnitude of that loss.
Second item: RMI’s Having a Training!
Does this sort of thing (above) sound fun to you? Does it sound very interesting, but you’re not sure how you would use it in the real world? Would you like to turn yourself or your analysts into Risk Ninjas?
Then RMI’s Basic Risk Analyst Training — yes, that’s BRAT — is for you! We’re going to be holding a two day training on December 11 & 12 of this year in Columbus, Ohio. I guarantee that if you apply BRAT training — it will change your approach to security, your job, and risk — making you substantially more effective!
The training will feature not only the concepts of risk and what makes it up; it will also include a practical course based on how Fortune 100 financial institutions use FAIR in their security portfolios.
We’ve tried to keep the training in the neighborhood of SANS/technical-control pricing — its list price is $2,500 — but we’re offering great discounts to members of ISSA, ISACA, InfraGard, and CUISPA. If you have budget money to “spend or lose” before year’s end, this is a great way to get significant value from that investment.
More information is here. I hope to see you there!