Around The Web For Friday

We’re frequently asked what we’re reading and what we like in blog posts, so here are some interesting things that hit our RSS readers that you may have missed:

COBIT rivals ITIL from The IT Skeptic

“Everyone is tiptoeing around the fact that COBIT offers a significant competitive body of knowledge (BOK) to ITIL. Sure ITIL goes into more depth in places, but to say COBIT sits over the top is to grossly understate the overlap. COBIT extends a long way down into the “how” and it does it with an intellectual rigour that ITIL lacks.”

Interesting stuff, that.  A detailed mapping might help some folks.  Either way, the good news for those keen on understanding risk management is that governance metrics, done right, allow us to understand a part of that “capability to manage risk” we’re always looking for.   Assurance, verification, and the acquisition and interpretation of knowledge are king.   Speaking of which….

How To Tell When “Nothing Happens” by Pete Lindstrom

“…problem is that, it isn’t really true that “nothing happens” when you employ some specific security control to prevent an exploit. Not only that, but even when it is difficult to collect data on what didn’t happen, one can devise experiments to tell how frequently that nothing occurred.”
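To make Pete’s point concrete, here’s a toy back-of-the-envelope sketch of one such experiment – all numbers invented, and this is my illustration, not his method.  Compare a population without the control to one with it, and the “nothings” fall out of the difference:

```python
# Toy sketch: estimating how often "nothing happened" because a control worked.
# All numbers are invented for illustration; this is not Lindstrom's own method.

unprotected_hosts = 50        # control group without the security control
unprotected_incidents = 40    # incidents observed in that control group
protected_hosts = 500         # hosts running the control
protected_incidents = 30      # incidents that got through anyway

# Base rate of attacks that would succeed absent the control
base_rate = unprotected_incidents / unprotected_hosts      # 0.8 per host

# Expected incidents on protected hosts if the control did nothing
expected = base_rate * protected_hosts                     # 400

# Estimated count of "nothings" -- incidents the control prevented
prevented = expected - protected_incidents                 # 370
print(f"Estimated prevented incidents: {prevented:.0f}")
print(f"Estimated control effectiveness: {prevented / expected:.0%}")
```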

Good analysis is all about the uncertainty.   Speaking of accounting for uncertainty…

Assets Good Until Reached For by Gunnar Peterson

“If you have 100,000 desktops or 100,000 servers it’s hard to manage. You will need to automate and to do that you need to abstract, but you should also realize that it’s a drawing on a whiteboard, not reality. You need abstraction assurance.”

And there’s the trick.  We might call “abstraction assurance” an analog to “confidence” or “uncertainty” in certain priors (metrics) or posteriors (calculated values based on those metrics).  The stronger that abstraction assurance is, the less uncertainty we have in our knowledge and the better our ability to create wisdom from that knowledge (you know, make decisions).
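If you want to see that confidence/uncertainty idea in action, here’s a minimal sketch – the numbers are invented, and the Beta-Binomial is just one convenient conjugate model – of how assurance evidence tightens our uncertainty about a prior:

```python
# Minimal sketch: "abstraction assurance" as shrinking uncertainty in a prior.
# Invented numbers; Beta-Binomial is just one convenient model for this.
from scipy import stats

# Vague prior belief that a control operates as drawn on the whiteboard
prior = stats.beta(a=2, b=2)           # wide: mean 0.5, lots of uncertainty

# Spot-check audits: 18 of 20 sampled systems matched the abstraction
matched, sampled = 18, 20
posterior = stats.beta(a=2 + matched, b=2 + (sampled - matched))

for name, dist in [("prior", prior), ("posterior", posterior)]:
    lo, hi = dist.interval(0.90)
    print(f"{name}: mean={dist.mean():.2f}, 90% interval=({lo:.2f}, {hi:.2f})")
# The interval tightens: more assurance evidence, less uncertainty,
# better raw material for turning knowledge into decisions.
```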

Epstein, Snow and Flake: Three Views of Software Security by Adam Shostack

Adam’s focus is on software security, but the discussion here can be abstracted out into the broader realm of risk management quite nicely.

Two-thirds of firms hit by cybercrime from Security Focus

The US DoJ says that in 2005 (there’s some timely data) 2/3 of their surveyed firms detected at least one cybercrime.  “Cybercrime” is “classified … into cyber attacks, cyber theft, and other incidents.”  Pretty general.  Also from the report:  “Computer viruses made up more than half of all cyber attacks.”

(That sound you hear is me tapping my forehead lightly on a large iron object)

Lessons Learned from “Personal” Risk Management By: Christopher Daugherty

“This process is what I call “personal risk management.”  All of us have done it and will continue to do so.  Why is it, then, many companies have ignored following similar principles with the on-going health of the business?  This is a debate with many different answers so I ask you to select the best answer for your employer:

a) Have not ignored as this keeps me awake at night!

b) Please restate the problem, I cannot hear well with my head buried in the sand.

c) We passed our SOX audit so we checked this off the list!

d) We are informed of the challenge but we have a business to run and profits to make

e) Is this what internal audit and risk management has been telling us?”

One Man’s Frustrations With “Risk Management”

Chris, who works in Government C&A, has a blog with a wonderful title: How is that Assurance Evidence?

I’d love to have another blog even more specific – “Ok, that Assurance is Evidence Of What, Exactly?”

Today he has a great article called:

What’s the matter with Risk Management?

And “in short, it’s everything.”  It pretty much sums up why I came to re-evaluate how our industry does risk, risk management, and how it approaches controls & vulnerability, and to look for a new way.   A couple of things jump out at me in reading Chris’ article:

1.)  Just because that Deming cycle sucks and is full of unknowns doesn’t mean “risk” doesn’t exist, nor that it isn’t of primary importance. Nor does it mean that in the absence of model & methodology, we won’t be “doing” risk analysis anyway – just in an ad hoc manner and completely from “the gut”.

Our industry calls these unstructured risk analyses “Best Practices”, as it’s an easy and convenient way of sweeping the unknowns under the rug of bureaucracy and enforcing them via peer pressure.

2.)  What this “suckiness” does mean is that your model and methodology aren’t helping you. As Chris intimates, there is too much uncertainty in the inputs for his model (they are, in the language of Bayesians, too subjective to be useful priors).

Take for example how we might be approaching the “controls” part of our analysis.  Chris writes:

“2. What are the controls that we have to employ?
800-53, ISO 27001, PCI, etc.

Still kinda good, but we basically know that ISO is relatively voluntary and NIST supplies a control catalog and not policies. So here we have to take the control catalog, and mash our policies into it.”

I wouldn’t call this “kinda good” at all :)   These control catalogs only provide a hierarchy within which to look for evidence of our ability to resist an attacker.  They are incapable of making any claim about the effectiveness of the controls when they are operated at 100% efficiency or, more importantly, about what % efficiency our specific organization operates at.

Let’s use Chris Hayes’ Initech as our fictional example.

Initech has a control (a locked back door on a loading dock).  Now the locks on the door are 100% capable of locking the door.  This is different than saying that they are capable of frustrating all but the top 5% of lockpicking burglars.  It is also different than saying that in a sample of several “walk around audits” the doors are left propped open 20% of the time (they are not in compliance with policy 100% of the time).  Even worse, during the 80% of the time the door is not propped open?  Yeah, tailgating is a known issue.

So we have several different variables here that we need to account for (and it’s just a door).  But the analogy stands: most “risk management” methodologies ask “We have a door, yes/no?”  And most GRC platforms, when asked for their “opinion”, will simply say “door is needed” or, even worse, “a door policy is needed”.
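For the curious, here’s a toy sketch of how those door variables might compose into a single number – using the made-up figures from the example above, plus an assumed tailgating rate:

```python
# Toy sketch composing the Initech door variables into one probability.
# The first two figures come from the example above; the tailgating rate
# is an additional assumption for illustration.

p_door_open = 0.20     # walk-around audits find the door propped open
p_pick_success = 0.05  # lock frustrates all but the top 5% of lockpickers
p_tailgate = 0.30      # assumed: chance a tailgater slips through a closed door

# P(entry) = P(open) + P(closed) * P(defeats a closed door)
p_defeat_closed = p_pick_success + (1 - p_pick_success) * p_tailgate
p_entry = p_door_open + (1 - p_door_open) * p_defeat_closed

print(f"P(entry on a given attempt) ~ {p_entry:.0%}")
# "We have a door, yes/no?" hides all of this structure.
```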

3.)  Criticality and the Source of Value is all messed up in these Risk Management models.

Chris writes:

“Someone wants me to tell them which boxes are more critical than others. This is mainly because of budgetary or operational reasons. To which I usually say ‘All of them, it is a system after all’.”

This literally made me laugh out loud.  And this sort of “rate the firewall as Risk = 500 but rate the actual business application as Risk = 157” thing is also endemic.  Now Chris is very smart here.  He correctly identifies that the value is tied to the business process the systems support, and not to a specific box.  Oh, we scan at the specific box level – but because of the nature of systemic failures – all the boxes in the process are inextricably interrelated.

One of the reasons I really like FAIR is that the losses are quantified (or qualified) based not on some amorphous value of the box or the process itself, but losses are linked to the actions that the threat will take. Take systems in highly regulated industries as an example.  Usually the most probable losses aren’t due to system compromise per se, but in the disclosure the compromise causes (regulators are a threat source, after all).  But many “risk management” methodologies will say “online banking is worth $2 billion, the value of the systems is therefore $2 billion”.  And suddenly we’re telling executive management that there’s a 60% probability that they’ll lose $2 billion.
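To illustrate the difference, here’s a toy Monte Carlo in the spirit of FAIR – emphatically not the FAIR standard itself, and every parameter below is invented – where exposure falls out of what the threat actually does rather than out of “asset value”:

```python
# Toy Monte Carlo in the spirit of FAIR -- not the FAIR standard itself.
# All parameters are invented. Losses follow from threat actions (here, a
# disclosure event and its regulatory fallout), not the system's notional worth.
import numpy as np

rng = np.random.default_rng(42)
trials = 10_000

# Loss event frequency: disclosure events per year, ~0.3 on average (assumed)
event_counts = rng.poisson(lam=0.3, size=trials)

# Loss magnitude per event: fines + response costs, median ~$5M, heavy tail (assumed)
annual_losses = np.array([
    rng.lognormal(mean=np.log(5e6), sigma=1.0, size=n).sum()
    for n in event_counts
])

print(f"Mean annualized loss exposure: ${annual_losses.mean():,.0f}")
print(f"95th percentile year:          ${np.percentile(annual_losses, 95):,.0f}")
# Nothing here says "60% chance of losing $2 billion" -- exposure is driven
# by what the threat (including the regulator) actually does.
```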

4.)  If the primary source of prior information for your “risk management” methodology is a vulnerability scanner, you’re doing it wrong.  Chris writes:

“So we ran a scan and now we have a report. A snapshot in time to make all decisions. Where did these vulnerability ratings come from? Do I even know if my system is at risk? What if I spend my time on vulnerabilities that have no threat?”

So first, my thoughts are that actual “vulnerability” must be a comparison of the force a threat can apply, and our ability to resist that force (this is a probability statement, btw).
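In code, that probability statement might look something like this sketch – the distributions, scale, and parameters are all assumptions for illustration:

```python
# Sketch of "vulnerability as a probability statement": the chance that the
# force a threat can apply exceeds our ability to resist it.
# Distributions and parameters are assumed, on an arbitrary 0-100 "force" scale.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

threat_capability = rng.normal(loc=50, scale=15, size=n)    # force a threat can apply
resistance_strength = rng.normal(loc=65, scale=10, size=n)  # our ability to resist

vulnerability = np.mean(threat_capability > resistance_strength)
print(f"P(threat force exceeds resistance) ~ {vulnerability:.1%}")
# Note: no scanner output appears anywhere in this statement.
```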

Changing your thinking about vulnerability now helps us understand the problem in several new ways.  First, you can start to divorce yourself from the scanner.  After all, the scanner is simply providing you with current state information that is usually just relevant variance from policy. It doesn’t really tell you about real “weakness in a system” because the system is an interrelated mess of people, processes and IT assets.

5.)  Finally, most “risk management” approaches just *don’t* do a good job of helping us understand the how’s and why’s of managing risk. In the past, I’ve referred to these standards as really being “issue management” because they are, at their heart, an act of discovery – a formal process around gathering prior information.  They are not, in and of themselves, capable of linking the issues discovered to the root cause.  And these root causes?  Yeah, they’re the things that create “risk”.  Not a threat, not a vulnerability, not the existence of an asset – the amount of risk that we have stems from our capability to manage it.

So Chris, I completely agree – but I wouldn’t give up yet.  There actually are a few of us who are focused on what you suggest:

“Where to go from here: A fundamental revamp of how to deal with Risk. Where risk professionals focus on treating the sickness and not the symptoms, and come up with some new success/actionable metrics.”

Chris, there’s nothing I want to do more than that.

So Logically, If She Weighs The Same As A Duck…She’s A Witch!

I usually try to stay far away from politics and current events, but my friend Rich has put up a blog post blaming the credit crisis on quantitative analysis, and then positing that because the economy sucks, Information Security should be only qualitative.

Now I’ve been “accused” of being a quant in the past (hi rybolov!) but in reality the only dogs I have in this fight are the model and the application of scientific method – and really, ethically speaking, I have to be tied to the latter while applying the former.

And I see a false dichotomy in this whole Quant vs. Qual thing.  We, as a profession, tend to create a political divide between the two which, if it even exists, I’d say is based more on our ignorance than on our expertise.  After all, we are the profession that regularly multiplies across ordinal scales and uses wonderful models like R = V x T x I.   As someone learning to deal in probabilities and rationalism, I have to recognize that this discussion is really just about the act of observation using different metrics of measurement.
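For the record, here’s a quick illustration of why multiplying across ordinal scales is dicey (the scores are invented):

```python
# Why multiplying ordinal ranks (R = V x T x I) is dicey: ranks carry order,
# not magnitude. Two invented scenarios score identically while describing
# very different problems.

# Ordinal 1-5 scale for Vulnerability, Threat, Impact
scenario_a = {"V": 4, "T": 1, "I": 4}   # e.g., weak control, rare threat
scenario_b = {"V": 1, "T": 4, "I": 4}   # e.g., strong control, constant threat

risk_a = scenario_a["V"] * scenario_a["T"] * scenario_a["I"]  # 16
risk_b = scenario_b["V"] * scenario_b["T"] * scenario_b["I"]  # 16
print(risk_a == risk_b)  # True -- identical "risk", very different realities

# Worse, the arithmetic assumes a "4" is twice a "2", which an ordinal
# scale never promised in the first place.
```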

But how we’re going about observing does not change the fact that there is measurement based on observation.  So if I’m working with you I can easily turn your qualitative scale into a quantitative one, and vice-versa.  Yes, Shrdlu, if we had the time, even your most seemingly Qual things could be Quant! (This flexible world view, btw, is an outcome of that new-fangled Bayesian thing).
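A minimal sketch of what that conversion might look like – the labels and bucket boundaries below are assumptions a team would calibrate, not any standard:

```python
# Minimal sketch: turning a qualitative scale into a quantitative one and back.
# Bucket boundaries are assumed placeholders a team would calibrate.
QUAL_TO_RANGE = {           # label -> (low, high) annual probability
    "Rare":     (0.00, 0.05),
    "Unlikely": (0.05, 0.25),
    "Possible": (0.25, 0.50),
    "Likely":   (0.50, 0.90),
    "Certain":  (0.90, 1.00),
}

def to_quant(label: str) -> float:
    """Qualitative label -> midpoint probability."""
    lo, hi = QUAL_TO_RANGE[label]
    return (lo + hi) / 2

def to_qual(p: float) -> str:
    """Probability -> qualitative label."""
    return next(label for label, (lo, hi) in QUAL_TO_RANGE.items()
                if lo <= p <= hi)

print(to_quant("Likely"))   # 0.7
print(to_qual(0.7))         # Likely
```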

COGNITIVE BIAS A-PLENTY

But back to what Rich is saying there about information security and risk – and he isn’t/won’t be the only one saying these sorts of things – we should try to understand what’s really going on rather than get caught up in the emotional hurricane.  Our profession suffers several forms of cognitive bias.  The nature of our jobs and what we do can cause us to focus on the outcome and not the quality of the decision at the time it was made.  We want to bring in things from other professions that are useful, but at times we view things outside our profession with false correlation to our own (unfortunately for those who write these sorts of articles, financial risk is completely different from operational risk).  We also have the tendency to focus on negative outcomes without acknowledging the positive outcomes (for example, I hear that Alan Greenspan’s new firm is up a couple of $billion in all this mess since he joined them, and short sellers are doing quite well – must be because they have qualitative models or something -grin-).  The effects of these biases are compounded by the facts that proper correlation takes more work than we usually give it, and rational thought is not that easy when there’s a witch-hunt mentality.

Burn her anyway!

What also floats in water? (link to Youtube)

WHAT SHOULD WE BE THINKING ABOUT?

So as you and I read opinions that seem to be the polar opposite of irrational exuberance (and there will be plenty between now and the election) we’ll have to ask ourselves, “what really failed here?”  At the risk (pun) of over-simplification:

  • Was There an Error on the part of Probability Theory?

After all, Probability Science, like all other fields of knowledge, is always “advancing” as they say.  So perhaps probability theory is wrong somehow?

I’m personally disinclined to put the blame here, primarily because I would think that there would be evidence from other fields (like Quantum Mechanics) that something is amiss waaaaay before it hit a field like economics.

  • Was There Error In The Model Used to Determine Risk?

Some people who understand real estate valuation and complex derivatives and financial risk want to put the blame here.  It’s a little too early to tell, but one thing is for sure – financial risk is so different from operational risk that I couldn’t begin to hazard an opinion on the subject.   But it would seem that this is really somewhere we might look.

  • Was There Error In The  Scale Used (Quantitative vs. Qualitative)?

Honestly?  I find it extremely difficult to understand how this could be the source of financial ruin.

  • Was There Error on the part of the Decision Maker?

What if all of the above were just fine, and the decision maker chose short term gain over long term stability?  What if this was (to simplify the matter greatly) a choice of “heads” over “tails” and the coin landed on tails?  What if the model represented the right risk (probability of negative outcome vs. positive outcome), but the complex derivative was sold to someone else who had poor “risk management” (the ability to make good decisions)?
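A toy expected-value sketch (numbers invented) of exactly that “good decision, bad outcome” situation:

```python
# Toy expected-value sketch of "good decision, bad outcome" -- numbers invented.
# A decision can be right at the time it was made and still lose the flip.

p_win, gain = 0.60, 100.0   # heads: 60% chance, win 100
p_lose, loss = 0.40, -50.0  # tails: 40% chance, lose 50

expected_value = p_win * gain + p_lose * loss   # +40: taking the bet is sound
print(f"EV = {expected_value:+.0f}")

# Judged only by outcome, 40% of the people who took this (good) bet look
# like fools afterward. That's outcome bias, not bad risk analysis.
```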

Now I have no clue about complex derivatives, and I’m oversimplifying to be sure – chances are that, like most things, several problems combined to create the failure. But it seems to me that as we go into incident response mode for the economy, it’s more helpful to do so in a rational, logical manner.

OTHER THINGS WE MIGHT WANT TO CONSIDER

Consider the Source
Some authors (who I think tend to exploit outcome and hindsight bias, and then combine those with indirect ad hominem attacks in order to sell their books) are actually putting forth arguments against the use of analytics.  The source of this is a current epistemic debate between those who believe that only falsification is certain, and those who maintain that neither proof nor falsification are certain – there are only probabilities.    So before you go believing any “quadrants” of usefulness on faith – I encourage you to understand what is at the heart of the discussion.

We All Have to Live In The Real World

The sun will rise tomorrow, and someone will try to find the source of the problem and do a better job.  Now chances are, they’ll be doing it in a quantitative manner.  Chances are also that at some point their models will fail and we’ll need to build new ones.  And this will happen whether the field is cosmology, economics, meteorology, information security, or professional baseball.

WHAT ABOUT YOU, ALEX?

I’m far from certain and subject to change, but these days I lean towards Robin Hanson & Michael Lewis with regard to placing blame.

Hansei and the CISO

Continuing our series on Hansei-Kaizen, you’ll recall that my thoughts are about applying the concept of relentless reflection (Hansei) and continuous improvement (Kaizen) to security management.  Today is a good day to talk about what we should be reflecting on, and what is needed for reflection.

I say today is a good day for two reasons:  1.)  BT’s CSO Jill Knesek wrote an article called “Keys to establishing an end-to-end security strategy” which invites some discussion in context, and 2.)  Sara Peters on Twitter last night wanted to know why I thought “risk management” requires more than what most “best practices” around the subject suggest the effort requires.

WHAT SHOULD WE BE REFLECTING ABOUT?

Jill Knesek’s article gives us a rough outline of how to develop a security strategy.  It’s fairly high-level, Pragmatic CSO-ish type stuff.  It gives us a nice outline of

  • Get a seat at the table
  • Process
  • People
  • Technology

Nothing earth-shattering there.  But it is a very nice broad CISO-level taxonomy about what we have to reflect on.  The need to reflect is driven by something Jack told me long ago,

“The amount of risk we have is a function of the decisions we made and our ability to execute on them from some point in the past”.

As an Aside:  So Sara, if you’re reading, this quote does much to explain why I said I disagree with much of what our industry calls “risk management”.  We tend to define the process of risk management as essentially a tactical “issue whack-a-mole” exercise. Find the issue.  Analyze the “risk” around the issue.  Fix the issue.  Repeat. This hamster-wheel-of-pain, while sometimes an effective tool for the CISO, is incongruous with addressing root causes (the ability to match a tactical issue to the strategic shortcoming that created the issue is up to the expertise of the analyst or consultant).  It is only Kaizen without (good) Hansei, if you will.

Back to what Jill is writing – the sorts of things we should be reflecting about can be thought of in context of her outline.  Namely:

  1. Once you have a seat at the table, what is the nature of that relationship?  Who are you reporting to and what are their concerns? What and how are you reporting and how might that be addressing their concerns?
  2. What processes are in place? How do I know that those are the processes that should be in place? If they are, what kind of job am I doing at those processes?
  3. What is the quality of the skills and resources I have from a people perspective, and how do I know if they are adequate?  How do I know that the training they petition me for will effectively reduce organizational risk?
  4. Are the Technology solutions I have in place effective, are we managing them effectively, and what sort of States of Knowledge could they provide me with (to make good decisions and execute upon them, from above)?

This, for the CISO, is Hansei.  The continuous management of it is Kaizen.  Not to particularly pick on Jill’s article, but creating a “risk register expressed in ALE” might be fine if you’re trying to explain to the board what your “first 100 days in office” will be like – but these sorts of lists are usually not very strategic in nature, and as such, depending on the outcome of that risk register (and the models used to create it) it might not actually be useful.

WHAT IS NEEDED FOR REFLECTION?

So what is needed for this sort of CISO-level Hansei?

The CISO must understand the

  • Current State of Nature

turn that into a

  • State of Knowledge

and use that to create a

  • State of Wisdom.

CREATING A STATE OF NATURE FOR THE IRM PROGRAM

This Current State of Nature determination can be done by applying analytical methods to a program audit.  We must understand questions like,  “What is in that program and how is it structured?”  before we can answer questions about “how (good/bad) are we at managing risk?”

There are many ways to structure an IRM program, but as an example – below is a graphic shared with me by Adrian Seccombe.  For those who know Adrian and the Trust Model – this is classified as “white” so it’s OK for public display and consumption.  But here’s what Adrian is trying to build at a high level:

So regarding Adrian’s program diagram:

  1. Is a governance framework.  Think ITIL.
  2. Is a risk framework.  Think ISO 27002 using FAIR as an analytical engine.  To be fair (pun) I believe this is really issue management, and it’s a process, but that’s OK.
  3. Reg compliance should be self explanatory.  That’s essentially what GRC products do for you.
  4. With architecture, I think Adrian is inclined towards TOGAF.
  5. Security is the ISMS in place (27001, ISM^3, PCI, whatever…)
  6. Are the processes that drive execution
  7. Monitor (audit) is creating a State of Nature and Evaluate is creating a State of Knowledge from that State of Nature around items 1-6.

EVALUATE – CREATING A STATE OF KNOWLEDGE ABOUT THE IRM PROGRAM

That Evaluate step is Hansei/Kaizen.  Evaluation, done effectively, will drive actual organizational risk exposure.  Evaluate will even answer those four questions we raised in the “What Should We Be Reflecting About?” section above:

  1. Once you have a seat at the table, what is the nature of that relationship?  Who are you reporting to and what are their concerns? What and how are you reporting and how might that be addressing their concerns?
  2. What processes are in place? How do I know that those are the processes that should be in place? If they are, what kind of job am I doing at those processes?
  3. What is the quality of the skills and resources I have from a people perspective, and how do I know if they are adequate?
  4. Are the Technology solutions I have in place effective, are we managing them effectively, and what sort of States of Wisdom do they provide me with (to make good decisions and execute upon them, from above)?

If we could have a nice metric (or set of metrics) that answers these questions, we might call it something like “My Ability To Manage Risk” or MATMR for short.
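Purely as a hypothetical sketch – the scores, weights, and scales below are all invented placeholders, not a worked-out methodology – a “MATMR” roll-up might look something like:

```python
# Purely hypothetical sketch of a "MATMR" roll-up -- the name is the joke
# above; scores, weights, and scales are invented placeholders.

# Score each reflection question 0.0-1.0 based on Evaluate-phase evidence
answers = {
    "reporting_relationship": 0.8,   # Q1: seat at the table, right concerns
    "process_fitness":        0.6,   # Q2: right processes, executed well
    "people_adequacy":        0.5,   # Q3: skills and resources vs. need
    "technology_knowledge":   0.7,   # Q4: effective tech, usable knowledge
}
weights = {                          # relative importance (sums to 1.0)
    "reporting_relationship": 0.20,
    "process_fitness":        0.30,
    "people_adequacy":        0.25,
    "technology_knowledge":   0.25,
}

matmr = sum(answers[k] * weights[k] for k in answers)
print(f"MATMR = {matmr:.2f}")  # 0.0 = flying blind, 1.0 = strong ability
```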

GETTING TO A STATE OF WISDOM

What’s then missing is how you create a State of Wisdom around the State of Knowledge developed – your “MATMR” metric.  That is, given the current State of Knowledge – how can I be most effective?  This State of Wisdom requires proper models for what risk is and what you can do to manage it, applied in a probabilistic manner (because we can’t intrinsically *know* the future, we can only say with some degree of certainty what the desired course should be).

So the outcome of Hansei/Kaizen should be to create a State of Wisdom about Risk Management.  This is why reflection must be relentless – because your wisdom must be similarly abundant.

This is no small part of the reason RMI exists, why we build software and help organizations understand the things they do.

Best, Good, Standard Practices

Dilbert.com

It’s like Scott knew it was my birthday and wrote a special comic just for me!

Risk and CVSS

Chris Hayes is going to town on risk content with his last two posts on Risk & CVSS.  I told you his blog was going to be a good one.