The Units of Risk and Learning How to Measure Them!

I’m hoping to cover a couple of things with you today.

First item: Threats, Vulnerabilities, and Controls: Units, Interaction, and Measurement

Jim @ DCS Security is becoming a blogging madman. I really enjoy his blog — it’s among the many that I hope to add to the links section on your right as soon as possible. Unlike my posts, his are short, sweet, and thought-provoking.

Today he asks,

“I would be interested in seeing the security blogosphere’s take on the relationship between Threats, Vulnerabilities and Controls.”

Ask and ye shall receive.

Up till now, we, as a profession, have been trying to work with or around what the folks at the Episteme blog have been talking about today:

I’ve never been a fan of the “reasonable man” test — it sounds too much like the way that many people in information security assess risk. I call it the “Potter Stewart Pornographic Risk Assessment Method” — “I don’t know how to define it, but I know it when I see it.” (This is the method that advocates of Donn Parker’s “Due Care” approach to information security practice suggest.)

We know that there is this thing called “risk.” Whether it’s on Jim’s site or on the SecurityMetrics mailing list, folks are starting to figure out that risk is made up of something, and if we can find out that something and break it down, we can come up with a way to really express and measure it.

Fortunately, Jack Jones started thinking about all this four years ago, and FAIR is the result. FAIR confirms that Jim @ DCS and those good folks on the metrics mailing list are correct — part of risk involves those elements we obsess over.

A visual representation of the relationships in FAIR:
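
In case the diagram doesn’t come through, here’s a rough text sketch of the factor tree — my paraphrase, so see the FAIR whitepaper for the authoritative version:

    Risk
    ├── Loss Event Frequency (LEF)
    │   ├── Threat Event Frequency (TEF)
    │   └── Vulnerability
    │       ├── Threat Capability (TCap)
    │       └── Control Strength (CS)
    └── Probable Loss Magnitude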

On Threats

Threats contribute in two direct ways. What does any given threat do?

  1. It tries to hurt us (note I said “try” — more on that later). In FAIR we call that Threat Event Frequency (TEF). Think of TEF as the moment of the attempt: a virus is sent via email, a port scan is launched against our IP space, or a phone call (in social engineering) is made.
  2. When it tries to hurt us, it uses its capabilities (resources, skills) to attempt to cause loss. In FAIR terms, we call that Threat Capability, or TCap.

So our “units” of measurement for any threat we need to be concerned with are the number of times it’s going to try to hurt us (there are factors to measure that, as well) and how well it’s going to do its job. If you want to come up with “units” for how well a threat is capable of doing its job, think of it as a population distribution. In terms of capable external hackers, for example, script kiddies might make up the bottom 3/4, Metasploit mavens the next 3/16, and the final 1/16 are the “uberhackers gone bad” that we all assume are ready to exploit all the noise that our scanners mark in red.
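
If you want to play with that idea, here’s a minimal Python sketch. The tier shares are the fractions above; the percentile ranges and everything else are my own illustrative assumptions, not FAIR canon:

    import random

    # Toy model of an external-hacker threat community: three tiers, each a
    # share of the population mapped to a Threat Capability (TCap) percentile.
    TIERS = [
        # (population share, TCap percentile range) — illustrative numbers
        (0.75,   (1, 75)),    # script kiddies: bottom 3/4
        (0.1875, (76, 94)),   # Metasploit mavens: next 3/16
        (0.0625, (95, 99)),   # "uberhackers gone bad": top 1/16
    ]

    def draw_tcap(rng):
        """Pick a tier weighted by population share, then a percentile in it."""
        roll = rng.random()
        cumulative = 0.0
        for share, (low, high) in TIERS:
            cumulative += share
            if roll <= cumulative:
                return rng.randint(low, high)
        return 99  # guard against floating-point rounding

    rng = random.Random(42)
    sample = [draw_tcap(rng) for _ in range(10_000)]
    print(sum(t >= 95 for t in sample) / len(sample))  # ~0.0625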

On Vulnerability

Think about it for a second and tell me if this question seems goofy to you:

How vulnerable are we to the vulnerability?

It’s an odd question. Vulnerability, as our brains interpret it, suggests that by default we are in a state in which we are likely to be hurt. However, as we in the InfoSec profession use the term “vulnerability,” the question is valid. Why? Because we’re not always “vulnerable” to a “weakness in a system” — the latter being the common definition of what we mean when we say “vulnerability.”

Still with me? Think of it this way: I’ve got an unpatched Windows 98 (or Red Hat Linux 6, or whatever) box under my desk. It does one thing — it stores BBQ recipes in a database so I can get my pulled pork on. It has no network connectivity, not even an Ethernet card. How “vulnerable” am I to everything a scanner might find? Using the “weakness in a system” definition, I’m terribly vulnerable. However, I don’t feel very vulnerable. Why? Because the biggest threat I can think of is my own coffee cup.

To feel vulnerable I must take into account the capability of potential threats and my ability to resist those actions. Vulnerability must be a function of how capable my threat source is when compared to my ability to resist it (i.e. my controls).

Note that the only place a “weakness in a system” really fits into this equation is when we consider “controls.” If there is a significant weakness then our Control Strength (see below) suffers greatly in comparison to an applicable threat community.

So what are our “units” for vulnerability? It’s a ratio between the strength of our controls and the ability of a threat to overcome them. Though an uberhacker may be able to exploit any number of weaknesses on my unpatched system, I’m not really vulnerable to him: I lack connectivity (a control in itself), I am physically distant from him, and the value of my data is nil. Now my coffee cup, on the other hand….
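
To make that ratio concrete, here’s a toy sketch — the uniform threat community and all the numbers are stand-ins, not real data. Vulnerability falls out as the fraction of threat events whose capability beats our Control Strength:

    import random

    # Vulnerability as the odds that a threat out-muscles our controls. Both
    # TCap and Control Strength (CS) are read as percentiles against the same
    # threat-capability population.

    def vulnerability(tcap_draws, control_strength):
        """Fraction of threat events whose capability exceeds our controls."""
        beats_us = sum(1 for tcap in tcap_draws if tcap > control_strength)
        return beats_us / len(tcap_draws)

    rng = random.Random(7)
    tcaps = [rng.randint(1, 99) for _ in range(10_000)]  # toy uniform community

    print(vulnerability(tcaps, control_strength=10))  # raw box on the Internet
    print(vulnerability(tcaps, control_strength=95))  # big-budget F.I. controls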

Note that an attacker has to “try” — and we have to be “vulnerable” to them in order to have a loss event.
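
In code form, that “try AND vulnerable” requirement is just a product — a sketch, with both inputs assumed for illustration:

    # A loss event needs a threat to act (TEF) *and* the act to succeed
    # (Vulnerability). Loss Event Frequency (LEF) combines the two.
    threat_event_frequency = 25.0  # assumed attempts per year
    vuln = 0.04                    # assumed share of attempts that beat us

    loss_event_frequency = threat_event_frequency * vuln
    print(f"Expected loss events per year: {loss_event_frequency:.1f}")  # 1.0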

On Controls

The FAIR whitepaper dedicates an entire section to controls. I won’t duplicate what it says here, but I will mention this: controls either prevent, detect, or aid in our ability to respond to an incident. Now, this weekend there was quite a bit written about IDS/IPS and the value of that technology or process. I don’t have much to say about that particular technology except the following:

Think about a threat source that IDS/IPS is supposed to help us defend against. When you think about their capabilities and the resources at their disposal, think about them in terms of the above TCap population distribution of bad guys — are they in the top 1/4, middle 1/2, or bottom 1/4 of the distribution when it comes to their abilities?

Now think about an asset that we’re expecting IDS/IPS to protect.

Now, consider what effect IDS or IPS will have (in terms of prevention, detection, and response) against that particular threat source and their toolset/capabilities — and how it will contribute to an overall Control Strength distribution (see the “units” paragraph, below). That’s the impact (and value) that IDS/IPS has. Typically, I’m guessing that IDS/IPS contributes to preventing/detecting threats in the bottom 3/4 of the capability population, but only aids us in response efforts (network forensics) against the top 1/4.

The “units” we’re looking for represent a gauge of the strength of my aggregate controls. Again, think of a population distribution. If we hooked that recipe server up to the Internet raw and in the wild, we’d be at the very bottom of control strength. If, on the other hand, our controls are on the order of a large-budget financial institution’s, we might be in the top 1–5% of the population.

I hope this helps. Note that we’ve only covered one half of the “risk equation.” The factors covered today contribute only to the probability of loss, not the magnitude of that loss.

Second item: RMI’s Holding a Training!

Does this sort of thing (above) sound fun to you? Does it sound very interesting, but you’re not sure how you would use it in the real world? Would you like to turn yourself or your analysts into Risk Ninjas?

Then RMI’s Basic Risk Analyst Training — yes, that’s BRAT :) — is for you! We’re going to be holding a two-day training on December 11 & 12 of this year in Columbus, Ohio. I guarantee that if you apply the BRAT training, it will change your approach to security, your job, and risk — making you substantially more effective!

The training will feature not only the concepts of risk and what makes it up; it will also include a practical course based on how Fortune 100 financial institutions use FAIR in their security portfolios.

We’ve tried to keep training in the neighborhood of SANS/Technical Control pricing — its list price is $2,500 — but we’re offering great discounts to those of you who are members of ISSA, ISACA, InfraGard, or CUISPA. If you have budget money to “spend or lose” before year’s end, this is a great way to get significant value from that investment.

More information is here. I hope to see you there!

W00T!!!

Well, how about that?

Happy Friday!

This year’s Cardinals team will make for some really great statistical analysis. It’s these sorts of outliers from which useful research is born.

Speaking of birthing useful research — I don’t know if you’ve seen it, but Rattle is a front end to “R” that shows some great promise.

Speaking of great promise – I really wish Wendy would write more. And yes, I was thinking that before I saw that she referenced FAIR :)
http://layer8.itsecuritygeek.com/index/layer8/everything-you-know-is-wrong/

Also, Notice Bored celebrates the 3000th ISO 27001 certification. The good news about that is that there are 3000 companies that actually funded such an effort…

State of the Security Market

Lots has been said on the Sourcefire IPO and the Counterpane buyout. Richard of TaoSecurity is dead on about Sourcefire.

I’d like to beg your indulgence and weigh (wade?) in myself. First, the market for security just can’t be as big as the number of small product vendors we see every month in our free subscription magazines suggests. In fact, Richard is SO correct about migration to the switch that we could discuss the implications of that fact until we’re blue in the face. I’ll leave you with this thought for discussion:

– IOS, and Cisco’s ability or inability to leverage it, might have more impact on business computing for the next 5–7 years than Windows or UNIX/Linux.

Sourcefire will not be a Counterpane. They won’t be an ISS, but they won’t be a “fire sale” either. Simply stated, Sourcefire is a product company — Counterpane was a human company. When considering market value, what those that matter consider is the ability to acquire the existing company’s customers. Sourcefire, through the installed base of its technology (whether or not SNORTies are buying any sort of contract from Sourcefire), makes it difficult for a party interested in increasing its presence in the market to acquire their customers. In other words, Sourcefire’s value is tied to its installed base because of the difficulty of ripping out a box from the rack, retraining your staff on a new box, and going through the toenail-pulling pain that can be the process of vendor evaluation.

Counterpane, however, was a “human” company. Competing with Counterpane meant not replacing boxes so much as it meant the cost of acquiring their clients. Every year or three, Counterpane has to make the sale again.

Finally, there really wasn’t much innovation at Counterpane. Despite wonderful folks like Marcus Ranum and Bruce, they never really translated those advantages into a differentiator. Wow, you’re a MasterCard SDP vendor? You and every other security company with $10k for the marketing fee. Vulnerability scanning, penetration testing? Sweet! But the client you have this year is actually *more* inclined to switch vendors for those services when the need arises again. Let me let Counterpane explain the problem themselves:

X Rich heritage in managed security
X Real, global economies of scale and cost
X Ability to monitor any critical asset at customers’ request
X Compliance with data privacy mandates
X Providing objective, critical advice on key security issues
X Commitment to delivering the most robust security infrastructure
X Ensuring Counterpane customers the best possible defense

The only thing on this list that someone can’t replicate? That rich heritage bit (and even then one might argue that they need to drop the word “managed”).

Anyway, Counterpane really are good folks, and I wish them all the best at BT. Maybe BT’s cash resources will float them to build the next great thing. Marcus certainly has the ability.

Risk Tolerance

From the Securosis / DCS Security discussion of risk and best practices (which is a lot of fun — and, as I understand it, the author of Securosis is a Gartner analyst, so he ought to know):

“Analyst best practices will make you really fracking secure, but probably cost more than a CEOs parachute and aren’t always politically correct.”

This, my friend, within the context of your advanced FAIR training (which you’ve had, right?) is what you’ll recognize as misjudging the “risk tolerance” of an organization.

One can over-estimate the risk tolerance of senior management and be told to revise one’s budget expectations. One can also under-estimate the risk tolerance of senior management and find out about it the hard way.

At the end of the day, it makes sense to ask management for its risk tolerance. Uncomfortable, yes, but it beats not knowing. Plus there are all those benefits of a mutual understanding of what risk means and the actions you’re expected to take.

FYI, the answer folks seem to be looking for might be that proper risk management is bestest practices.

Australian Waterfowl, Philosophy, and Zero Day Events

What do philosophers pontificating about swans have to do with risk management? Sometimes everything.

Peter Lindstrom asks whether freak accidents are Black Swans. Good, thought-provoking question!

Let’s consider what a “Black Swan” is…

“No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” (John Stuart Mill rephrasing David Hume)

That comes to us from Nassim Nicholas Taleb’s wonderful book, “Fooled by Randomness.” To Taleb, a Black Swan is a large-impact, hard-to-predict, rare event beyond the realm of normal expectations.

Of course, the white-swan induction eventually failed spectacularly: Cygnus atratus (pictured above), an Australian swan, was discovered. Bravo for life’s little ironies!

Back to Lindstrom: those freak accidents — the category as a whole — are not “Black Swans.” In other words, we have a population of 300 million in the U.S. of A. and six billion in the world, and accidents such as these are “bound” to happen. In addition, looking at the examples that come back from Peter’s Google search, I would have a tough time describing a plane crash, death by animals protecting themselves, or — as unfortunate as it is — a toddler dying from an accident as “Black Swans.” These are all things that we know are within the realm of probability.

Black Swans and Zero Day

As Mike Rothman points out, the definition of Zero Day is becoming a hot topic. Alan Shimel has (rightly) suggested a refinement of the nomenclature concerning Zero Day. He adds the category “Less than Zero Day.”

Whether you agree or disagree with Alan’s definitions and approach, he, like all of us, is really trying to account (somehow) for “Black Swans” — weaknesses and exploits that exist but are not common public knowledge. If there’s a weakness and we (the “white hats”) know about it, then even if there’s no patch available, it is not beyond the realm of imaginable probability that it can be used against us. Thus, those are not “Black Swans.”

Nitpicky note: consider Alan’s use of “Risk” in his graph. Nobody knows the true impact of a “Black Swan” or Zero Day for a particular exploit on a particular set of systems yet, so it’s difficult to judge “Risk to the Organization.” One might run FAIR models using Force Majeure and/or Technical Expert threat communities, an LEF of “1,” and assumed worst-case losses if you wanted to model potential impact (you could even do what my friend Clarke Cummings suggests and apply Monte Carlo methods to ranges of losses). But we simply can’t assume that all Black Swans will result in the worst case.
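
Here’s a minimal sketch of that Monte Carlo idea — every number is made up for illustration. Instead of assuming worst case, sample the per-event loss from a range:

    import random

    # One loss event (LEF of "1"), with the loss drawn from a triangular
    # range: most incidents cluster near the mode; the worst case is rare.
    def simulate_losses(trials, seed=1):
        rng = random.Random(seed)
        return [rng.triangular(50_000, 5_000_000, 250_000) for _ in range(trials)]

    losses = sorted(simulate_losses(10_000))
    print(f"median loss:  ${losses[len(losses) // 2]:,.0f}")
    print(f"95th pct:     ${losses[int(len(losses) * 0.95)]:,.0f}")
    print(f"worst sample: ${losses[-1]:,.0f}")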

See? Billy Idol Gets It, I Don’t Know Why She Doesn’t!

The title today is my favorite line from Adam Sandler’s “The Wedding Singer”.

I read a really good post this morning on the Information Security Is Not an Oxymoron weblog. Bill P. there “gets it”.

“The thing that has always bugged me is that with stove-piped or otherwise narrowly defined organizational structures, the technology guys & gals are not really that cognizant of what they are actually supporting. Sure — if you ask them what they “do”, they will say: “I’m a Network Engineer”, or “I’m a Security Analyst”, or “I’m a Software Development Manager”. What they all fail to indicate is their relationship to the Company.

“What they actually should focus on is that relationship. When asked, the answer should always be: “I’m here to support the Business in meeting their goals within their requirements”. Anything you do to support that is a function of your job, but it’s not your job.”

Wonderful, eh? I’d have to say that if we find ourselves guilty of the above – part of the blame also lies at the feet of the organization. As much as we tend to “silo” ourselves – businesses tend to silo IT (and IT silos Infosec).

However, it’s naive to wait around for the organization to change and “de-silo” us. It’s on our shoulders to be relevant to the business. How? My suggestion is to stop feeling like we need to “control” the use of Information Technology and simply become schooled in expressing risk. This involves a really strong S,C,&A process — but hey, if the doctors want wireless palmtops or tablet PCs, great! Just as long as we explain it in something like the following:

“This is a high risk proposition (and here’s the defensible study to back that qualification of risk) and you should expect to lose X amount of money every six months. OR you could invest in these controls – and live with their inconvenience, reducing your risk by several factors.”

Yes. I’m advocating transferring the political risk of IT decisions to business owners.

How to “Cheat” at Best Practices and Win the Game

(Our image today is of Kenny Rogers pitching last night in the World Series. Note the foreign substance — likely pine tar — on his pitching hand. For those who don’t know: pine tar, when properly placed on a baseball, “enhances” the ball’s movement and is considered against the rules. Is there the equivalent of pine tar for our firewall administrators?)

DCS Security and I have been talking a little about metrics, statistics, and Moneyball. A quick story for you:

I actually was in a discussion once with a very smart person who was huge on “best practices” instead of risk management. He (a European) actually cited how sports teams do well from year to year by following “best practices” as an example of their usefulness. I laughed and suggested that this individual read the book “Moneyball”.

Doing More with Less by Challenging Best Practices

If you haven’t read “Moneyball” and you’re a sports or baseball fan, you really should. Moneyball is a book about how the Oakland Athletics baseball team’s management found “market inefficiencies” and have used them to successfully compete against teams with 4 to 5 times the payroll.

A really incredible article on Moneyball, statistical analysis/modeling, and business risk is here. Please read and come back.

Are you back now? Good. What I find interesting is that the General Manager of the A’s, Billy Beane, essentially had to train his scouting staff to ignore decades of “best practices” and instead utilize metrics (the successful use of statistics) to become incredibly effective. Among other things, Beane ordered his staff to:

  • Ignore a prospect’s athleticism and focus on “baseball skills”. Best practice for scouts was to look for raw athleticism and hope that baseball skills could be taught. When Beane explained that he’d rather have a slightly overweight player who could take a walk and hit for power, he famously quipped, “We’re not selling jeans here”. He used probability modeling to identify high-risk players (who turned out to be the “toolsy” players loved by best practices) and steer clear of them. The same modeling helped him project which college players exhibited baseball skills, creating a great farm system (important because, the way baseball has structured free agency, young talent is the cheapest).
  • Use metrics to find out exactly what winning teams do well and build a team that demonstrates those abilities. Much like Jack Jones built FAIR from the fundamental basics of risk and ignored all his preconceived notions, Beane’s former boss had a team from Stanford ignore all of baseball legend and lore and look at how teams won ballgames. Simply stated: by scoring runs. His team looked at how past winning teams scored runs and built a “taxonomy” for offense based on those primary factors. Out go traditional “best practice” statistics like RBI, batting average, and stolen bases as primary metrics of a hitter’s ability — in come On-Base Percentage, Slugging Percentage, and their derivative OPS, etc… (a quick sketch of those derived metrics follows this list).
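
For the non-baseball readers, here’s what those derived metrics actually compute. The formulas are the standard ones; the stat line is invented for illustration:

    # On-Base Percentage: how often a batter avoids making an out.
    def obp(hits, walks, hbp, at_bats, sac_flies):
        return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

    # Slugging Percentage: total bases per at-bat (a power measure).
    def slg(singles, doubles, triples, homers, at_bats):
        total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
        return total_bases / at_bats

    # A patient, powerful hitter with a modest .269 batting average:
    hits, doubles, triples, homers = 140, 30, 1, 35
    singles = hits - doubles - triples - homers
    on_base = obp(hits, walks=110, hbp=5, at_bats=520, sac_flies=5)
    slugging = slg(singles, doubles, triples, homers, at_bats=520)
    print(f"OBP {on_base:.3f}  SLG {slugging:.3f}  OPS {on_base + slugging:.3f}")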

The result? Well, the A’s have been competitive year after year now with a payroll easily in the bottom 1/4 of the league ($40 million A’s vs. $200 million Yankees). So successful are Beane’s methods that they’ve withstood the losses of key players like Jason Giambi, Johnny Damon, Tim Hudson, Mark Mulder, Jermaine Dye, etc… and remained competitive.

What’s the lesson for us? Best practice, by its nature, is full of inefficiency. Anyone who has ever dealt with a compliance-heavy environment knows that it is also full of inefficiencies. Good Risk Management — that is, a good understanding of risk, its component factors, and metrics built from the taxonomy of those factors — can help organizations identify inefficiencies and refine their practices to be “better” than “best” (a task made easier these days now that most auditors/regulators/examiners love the “hot” term, “risk”).

And while maybe not every employee needs to perform risk analysis/risk assessment on a daily basis, if you’re trying to build a competitive team that does more with less, it makes sense to make sure that your team knows what a “win” is. In baseball, that’s “score more runs than your opponents”. In security and risk management, that’s a reduction not just of incidents, but of loss as well. To do that effectively, it might just help if your players understand what risk is and what relationship vulnerability, controls, and threat agents have to it. Why? So that they can be as effective as possible in how they build and report metrics, categorize incidents and impact, and stop focusing on possibility in order to understand the probability issue that is risk.

A quick post-script: initially, when FAIR was first implemented at a large Financial Institution (F.I.), it wasn’t known how much use risk management would be to all aspects of the security group. It was to our happy surprise that a proper understanding of risk benefited engineers in almost every area — from control management to incident response. Turns out that successful Risk Management, at its core, involves the interaction of control managers, asset managers, business owners, and risk program management.

What Risk Management Isn’t

You may recall that in my last post I wrote:

“Note that when I say above “basic approaches to Information Risk/Security Management” I’m using the word “management” in its strictest sense — the management of a Risk program, not management of security devices. There’s a big difference there, and people often confuse the two. I’ve found those most guilty of that confusion are the “Instinctive” types.”

Today I’d like to expand upon that statement. In this, I’m going to try to get to the heart of what exactly Risk Management is. Not a dictionary definition per se — but what it means to have risk management. I think that today it’ll be easier for me to start with what isn’t Risk Management.

First, as I mentioned, Risk Management isn’t management of security devices, regardless of what a device may do for you. That I would describe as Control Management.

Next, from what I’ve seen in the security market, Risk Management isn’t the function of any one vendor box or group of boxes. Mike Rothman is absolutely right on in that linked article. The marketing of such boxes leads a lot of people to be dismissive of vendors and to distrust the term “risk management” in general.

Finally, Risk Management isn’t a once-a-year BIA and Risk Assessment using one of the following: NIST 800-30, OCTAVE, COSO, Basel II, etc… Though this approach makes a nice binder for your shelf, it’s about as useful as government compliance, which is to say it’s another hoop for us all to jump through. Okay, maybe not Basel and COSO — but my point is that Risk Management isn’t following a checklist on a periodic basis.

So what is Risk Management? That’s more post than I’ve got time for today, but I’ll leave you with the following basic thought for discussion:

Risk Management happens in an organization when its analysts and engineers regularly/constantly consider likelihood and impact. It’s when the mission of a department isn’t just implementing controls — it’s understanding the impact of those controls (or lack thereof) on business development, and the ability to effectively express that impact to the rest of the organization.

Let me state that, at least by the standard of that last sentence, I can understand how people can argue for Risk Management as an “enabler”. When compared to the instinctive approach to Control Management that passes for Security Management in many organizations, Risk Management can certainly be thought of as such.

Instinct and Intuition and Risk Analysis

Another great website I recently found is Episteme. Just in the last week, they’ve had two excellent posts:

  1. The Units of Risk, which regular readers and FAIR folks might find a little cursory in its approach to risk. We can’t fault the authors — to me they’re at least showing us that they’re ahead of the curve in their thinking.
  2. Instinct and Intuition, which is a great post and what I’d like to discuss today (it leads to discussion about the Units of Risk).

As I read the latest blogs and books, and talk to various professionals (most of whom are much smarter and more experienced than I), I realize I’m starting to see three basic approaches to Information Risk/Security Management (apologies to Securosis for blending the terms):

  • Instinctive. Episteme uses the following dictionary definition: a natural or innate impulse, inclination, or tendency.
  • Intuitive. Episteme uses the following for “intuition”: direct perception of truth, fact, etc., independent of any reasoning process; immediate apprehension.
  • Empirical. I’ll use the following definition: “based on, concerned with, or verifiable by observation or experience rather than theory or pure logic.” In real life terms, I think of an approach driven solely by metrics and/or checklists (the ISO, COSO, Basel II, etc..).

What We’re All Trying to Accomplish

Before we delve into the three approaches to Risk Management, I’d like to talk about why we do what we do. The reason for the three approaches is simple: self-justification. Risk Management, whether you see it as a strict cost-center or as a business “enabler,” has an issue. It requires significant resources — whether at an enterprise or SMB level. Complicating matters is the fact that we professionals tend to run paranoid — a justified paranoia that drives us constantly to ask for more and better resources. When you find the CISO that says, “You know what, we have enough resources to do our job,” let me know. It’ll be a first for me. The whole reason we go through these exercises and use the above approaches is to justify our existence, to convince upper management that they need to give us money.

Note that when I say above “basic approaches to Information Risk/Security Management” I’m using the word “management” in its strictest sense — the management of a Risk program, not management of security devices. There’s a big difference there, and people often confuse the two. I’ve found those most guilty of that confusion are the “Instinctive” types.

The Instinctive Approach to Risk Management

The instinctive approach relies on our “natural or innate impulse, inclination, or tendency.” It relies, in other words, on our paranoia. We are (rightfully) scared of a breach in C, I, and/or A. Instinctive folks are generally the more technical — hackers in the truest sense of the word, very knowledgeable about how easy it is to break or bend technology. This knowledge drives the paranoia. When faced with “management” of risk, they tend to do one of two things:

  • Focus on the possible (it’s possible that a hacker will use a zero-day exploit on our executive’s laptop while he’s at Panera Bread, so no more laptops) and use FUD to try to beat the acknowledgment of justification out of executives. As Jack is fond of saying, we’re like shamans: we need to sacrifice chickens or the Thunder Gods (read: “hackers”) will get us, so give us more chickens, please. This approach works only for a while, and believe me, it’s tiresome to executives. In fact, the only time I see this approach really work is once there’s been a big incident, or when there’s a Merger/Acquisition on the horizon. And then it’s only a temporary success: the M&A occurs, or the incident becomes past history in the mind of the executive council, and so goes political viability.
  • Become either Intuitive or Empirical.

What’s the impact on our ability to achieve Justification? Using the instinctive approach limits the value of Information Risk/Security departments to the paranoia of the organization.

The Intuitive Approach to Risk Management

The intuitive approach is taken least often by regular practitioners. It is the most immature approach, and it’s starting to frustrate most CISOs that I’ve talked to. An intuitive approach involves using data to support our decisions and justifications — decisions and justifications that have already been made, given the experience of the CISO/Manager. In other words, it uses our internal “blink” and seeks justification for that blink. The problem with this approach is that, by definition, it involves “perception of truth, fact, etc., independent of any reasoning process.” Executives and regulators alike want to see, above all else, a reasoned process. The absence of a reasoned process leads intuitive types either to regress into the FUD of the instinctive, or to seek something else. Some of these folks move right on to Empirical.

What’s the impact on our ability to achieve Justification? Using the Intuitive Approach limits the value of Information Risk/Security departments to the political viability of the Risk/Security manager.

The Empirical Approach to Risk Management

The Empirical category I’d like to separate into two camps — the “Metrics Geek” (and I mean “geek” in the most complimentary sense) and the “Checklister.” The metrics approach is a kind of new trend that hasn’t really caught on yet, but I think you’ll hear much more about it over the next 36 months.

First, let’s talk about the Checklister. He’s like the biblical Pharisee: for him, righteousness (in our world, security) is obtained by adhering to a strict regimen of processes. Adherence to that regimen of processes is obtained by adherence to yet another regimen of processes, and so on. We are secure because we follow the ISO, which feeds data into our ERM approach (COSO, maybe), which is all nicely documented. Once a quarter, a cross-discipline team from IT Security, Risk Management, and Internal Audit runs around and does surveys/interviews with stakeholders, and either you get a check mark or you get a “finding.” The number of “findings” drives justification for resources. Some people say that IT in Europe is very checklist-driven.

The new Metrics Geek has an interesting approach. It combines the entrenchment of Checklisting with quantification of how well those processes are done. And while I think this is a better approach than pure checklist-driven management, it’s doomed to the same failures in justification.

The problem with the Empirical approach is that your checklist is your carrot. You’re still on the “hamster wheel of pain” (using the J Peterman Threat Levels — what a great post that is) despite any use of “key indicators.” If you’re doing poorly on your checklists, you’re either going to get the resources you ask for (with or without a verbal slap on the wrist) or you’re going to get fired. If you’re doing well with your checklists, what justification do you have for additional resources? Well, you default to “FUD” — using Gartner or similar analysts and “best practices” to tell your organization what resources it should be spending: “I need Norton Whizzbang because Gartner says all our competition will have Whizzbang.” As Jack likes to say, we need to sacrifice chickens to the Thunder Gods because the tribe up the river is sacrificing chickens — you don’t want the Thunder Gods to be happier with them than with us, do you? Give us more chickens, please.

What’s the impact on our ability to achieve Justification? Justification using the Empirical Approach limits the value of Information Risk/Security departments to what value compliance has to senior management.

A Blended (Reasoned?) Approach to Risk Management

I absolutely love this quote by Dan Geer:

“The only security metrics we are interested in are those that support decision making about risk for the purpose of managing that risk.”

This quote works because, if we use quantified (or even qualified) risk as our justification, we tie our value to the value of the business.

Unfortunately, we know that the factors that comprise risk, though now easy to identify, are difficult to gather data on. Think about the daunting task of identifying threat communities. Now think about the pain of identifying potentially “relevant” threat communities. Our “blink” works at first, but then we tend to focus on instinct, or “the possible,” and things degenerate quickly — especially if you’re working in groupthink mode.

Many people have been talking about the “Units that comprise Risk” (see the Episteme link, above). If we can quantify these “units” we can start expressing justification based on the value of the business. What are these units?

I believe these “units” should be metrics and/or intuitive estimates of the Factors that comprise Risk. You’ll recall my assertion that the problem with the Intuitive approach is that it relies on, “perception of truth, fact, etc., independent of any reasoning process.” FAIR is a reasoned framework. It drives objectivity into the subjectivity of the practitioner “blink.” It’s also pretty dern universal — FAIR can be used for IT, physical, legal, environmental risk, just about any type of risk you’d like to measure!

A blended approach would use a framework like FAIR to provide categories (potential losses and probable frequency of loss) of risk factors. Metrics are developed to give numerical meaning to these categories (the sought-after “units”). Checklists are built to make sure that the processes in place are regularly providing good metrics for the categories, and that risk analysis is happening in the right places in the management processes for decision-making to occur.

A blended approach uses the best of practitioner instincts and intuition (especially when using FAIR to drive objectivity) and combines the right aspects of Empirical (observation and data) without ignoring beneficial theory and logic, as they allow us to understand the relationships between risk factors.

The result of the blended approach is that we can explain our need for resources as a function of probability of loss. When senior management has the right expectations of what those models mean, justification is easy (Insurance company execs are already very versed in making decisions in this sort of manner). The right framework means that if there is a difference of opinions about the probability of loss — Risk Management’s conclusions are defensible. The onus is now upon the executives to question either the estimated frequency of loss or loss magnitude, and to prove which metrics or estimates they believe are at issue.
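
To make “defensible” concrete, one last toy sketch. The ranges are labeled estimates, not facts — precisely the inputs an executive is invited to challenge:

    import random

    # Annualized exposure from a frequency estimate and a magnitude estimate,
    # each expressed as a (min, max, most-likely) range rather than a point.
    LEF_RANGE = (0.1, 2.0, 0.5)           # loss events per year (estimate)
    MAGNITUDE_RANGE = (20e3, 2e6, 100e3)  # dollars per event (estimate)

    def annual_exposure(rng):
        lef = rng.triangular(*LEF_RANGE)  # triangular(low, high, mode)
        magnitude = rng.triangular(*MAGNITUDE_RANGE)
        return lef * magnitude

    rng = random.Random(0)
    draws = sorted(annual_exposure(rng) for _ in range(10_000))
    print(f"median exposure: ${draws[5_000]:,.0f}/yr")
    print(f"90th percentile: ${draws[9_000]:,.0f}/yr")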

Cool Cleveland Conference

The 2006 High Technology Crime Investigator’s Association (HTCIA) Training Conference and Expo is being held in Cleveland, Ohio, October 30, 2006 through November 1, 2006. This year’s event marks our 20th anniversary as a non-profit professional organization devoted to the prevention, investigation, and prosecution of high-tech crime. HTCIA has over 3,000 members throughout the world, and attendees are registering from all over the world for this important training event. (http://ohiohtcia.org/conf_main.html)

Keynote/lunch speakers come from MySpace, the U.S. Dept. of Justice, and the Brazilian Forensic Computer Crime Unit. We have five rooms devoted to breakout sessions and seven rooms devoted to hands-on computer labs. Here is just a sample of the topics and classes:

Artifacts of Deletion Utilities
Cell Phone Forensics
Network Crime and Network Intrusions
Internet Browser Forensics
Linux/SMART Enterprise forensics
ProDiscover Basic Freeware Lab
Access Data FTK 2.0 Technology
Investigating the Usenet: Tips and Tricks
Mac Forensics
Google as an Investigative tool
Forensics on “Live” Running Networks and Systems
Wireless hacking and Cell Phone Forensics
Inside Illegal World of the WAREZ
Tool Shootout for Cell Phone Forensics
AOL Forensics
Detecting and Collecting Whole Disk Encryption Media
Access Protected Registry Forensics
Ultimate Boot Disk CD for Windows
Investigating Wireless Devices
Steganography Investigations
The Handheld – The next hacker workstation
Tripping over Borders in Cyberspace – Legal Issues
Introduction to Malicious Software Analysis (Windows)
AccessData Rainbow Tables
Guide For Handling Cyber-Terrorism And Information Warfare
Advanced Unicode and Code Page Keyword Searching
Mobile IP, Secure Portable Metro Networks
Digital Crime Scene Forensics
Cyber laundering Informal Value Transfer systems
Electronic operations traceability. A challenge for IT Managers
Dissecting The Stream, IP forensics
Cell/Mobile Phones: The Good, the Bad, the GSM
Volatile Data collection from Running Windows Machines
Bypassing the Best Laid Plans: How They Steal Proprietary Information
Fuzzy Hashing- Matching similar documents
Proactive Forensics: The Data Before it Goes Bad
Instant message Forensics
Detecting and Extracting Steganography
Using Back Track to Compromise a Network
CyberCrime in Brazil
Anti-forensics
Using Google Desktop in forensic Investigation
Handheld Forensics: Cell Phones, PDAs, and Hybrids
Google Hello, Access Data Password Cracking
The turtle tool – Peer-to Peer Investigations
Maresware Tools
Legal Discovery and Redaction Issues
Moving from LE into the private sector
Legal Issues in Civil Trials
Network forensics in the digital world
Benefits and Risks of Undercover Internet Investigations
Proactive Online Investigation
Malicious software & Steganography Investigations
TCP/IP Protocol Analysis
Hacking with iPods and Forensic Analyst
Victims of Internet Crimes