Proper Risk Analysis *Can’t* Mean Unnecessary Controls

Sammy Migues over at the “Justice League” blog from Cigital has written an interesting article on “risk management”. Basically, he’s saying that we can have too much of a good thing. Too much risk management creates too many controls.

Except that it doesn’t.

Not the way we look at it at least.

THAT WORD AGAIN, “RISK MANAGEMENT”

First, I’d argue that our concept of risk management is a little more focused than his. In fact, we don’t even know what “risk” means to him. Not to get caught up in terminology, but basically he’s dealing with some aspect of issue management. There’s a problem (a vulnerability, a policy exception to be discussed, whatever), and what smart people do is view the “problem” through the lens of risk (the probable frequency and probable magnitude of loss), via risk analysis and in the context of the risk tolerance of the data owner.

You see, that last part is important. If your definition of risk is correct, and if your analysis is good, then all that is left is for the decision maker to figure out how willing they are to lose money. Because you’re giving them the information they need to make a decision, you can’t overspend because of risk decisions – unless you’re just absolutely loopy. You will spend exactly enough*. There is no “fixation on or over-thinking of each and every security issue” because the data owner expends only the amount of resources they are willing to allocate to reduce probabilities to their acceptable level. That’s it.
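Here’s a quick back-of-the-napkin sketch of what that decision looks like – Python, with invented numbers, my illustration rather than any particular methodology’s prescribed math. Probable frequency times probable magnitude gives probable loss; a control is worth buying only if the loss reduction beats its cost:

```python
import random

def simulate_annual_loss(lef, mag_low, mag_mode, mag_high, trials=10_000):
    """Probable frequency (loss events/year) times probable magnitude
    (a per-event range), estimated with a crude Monte Carlo."""
    totals = []
    for _ in range(trials):
        # crude Poisson-ish draw for the number of loss events this year
        events = sum(random.random() < lef / 100 for _ in range(100))
        totals.append(sum(random.triangular(mag_low, mag_high, mag_mode)
                          for _ in range(events)))
    return sum(totals) / trials

# Invented numbers: ~2 events/year, $10k to $250k per event, most likely $40k.
baseline = simulate_annual_loss(2.0, 10_000, 40_000, 250_000)
# A control we estimate halves event frequency, costing $30k/year to run.
with_control = simulate_annual_loss(1.0, 10_000, 40_000, 250_000)
control_cost = 30_000

print(f"probable annual loss, baseline:     ${baseline:,.0f}")
print(f"probable annual loss, with control: ${with_control:,.0f}")
print("control pays for itself:", baseline - with_control > control_cost)
```

The numbers are made up; the point is that once frequency, magnitude, and tolerance are on the table, “how much to spend” is a comparison, not a reflex.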

If anything, “risk-aware” organizations can be thought of as spending less than their counterparts because they’re not adhering blindly to “common practices” (of course, prescriptive regulatory environments prevent that efficiency by artificially inflating the amount of probable loss, but we’ve already talked about that plenty). I tend to think that risk-aware organizations don’t necessarily spend less; they spend better.

So this all depends on what Sammy means when he says “risk” and “risk management”, but I have to respectfully disagree – when done properly, it is improbable for risk management to create inefficiencies.

* It’s that whole Bayesian Rationalist thing. In theory, the use of the scientific method, logic, and the right Bayesian network means that any choice other than the conclusion(s) of good risk analysis is irrational.

** UPDATE:  Chris Hoff has written a very similar thought over at his blog and reality TV show, Survivor:  Corporate Risk Management Island or whatever he’s calling it these days :) (I kid because I love).

Semi-Security Weekend Reading

Happy weekend everybody!

You know, I read something on the order of 180(ish) security blogs, a handful of mailing lists, and 2 print magazines as a way of trying to keep up on new (and good) thought in the profession.  Now I’d like to think that this keeps me on top of publicly available security-related information.  One thing that I find is that we tend to be myopes, focused on our situation like it’s a strange, unique problem that the rest of the world cannot help us with.  I actually find a lot of value every week in the non-security blogs I read.  So I’m thinking that I might try to put out non-security topics that I think are of interest in how they may correlate to the issues we face.

CERTAINTY, UNCERTAINTY, AND CLIMATE DATA

We have lots of uncertainty.  I was talking with an industry forum this week in which one of the participants asked me about “detectability”.  Their risk analysis methodology uses this term for an estimated metric that essentially describes uncertainty about the ability “to detect”.  FAIR, as we use it, forces the analyst to account for uncertainty in every measurement and estimation.
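Mechanically, that can be as simple as carrying every estimate as a calibrated range instead of a point value, so the uncertainty survives into the result. A minimal sketch in Python – my illustration, not FAIR’s prescribed math – using the “detectability” question above:

```python
import random

def sample_estimate(low, mode, high):
    """An estimate expressed as a range (min, most likely, max) instead of
    a point value; a wider range simply means more uncertainty."""
    return random.triangular(low, high, mode)

# Hypothetical estimate: "probability we detect the event in time".
# We're unsure, so we say between 30% and 80%, most likely 50%.
samples = [sample_estimate(0.30, 0.50, 0.80) for _ in range(10_000)]

mean = sum(samples) / len(samples)
print(f"detectability: mean {mean:.2f}, "
      f"range {min(samples):.2f}-{max(samples):.2f}")
```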

You know who else has uncertainty?  Climatologists.  I’m going to try to stay away from the political discussion surrounding Anthropogenic Global Warming (AGW), but the Real Climate weblog I read is a good source of information regardless of what you believe about AGW.

This morning’s post from Real Climate has to do with a new paper in Science magazine that discusses climate sensitivity and uncertainty.  I wanted to bring The Certainty of Uncertainty blog post to your attention because it shows how we can use probability, account for the inherent uncertainty in any data gathering (and interpretation) effort, and still come to valid conclusions.  Too many times our “engineering” bent likes to pretend that we can only deal with information that includes “variance” rather than “uncertainty” (of course, they are very similar concepts).

the non-linear relationship between the strength of climate feedbacks (f) and the resulting temperature response (ΔT) … show(s) that this places a strong constraint on our ability to determine a specific “true” value of climate sensitivity, S. These results could well be taken to suggest that climate sensitivity is so uncertain as to be effectively unknowable. This would be quite wrong.

(emphasis mine).
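If you want to play along at home, here’s a toy version of the relationship the quote describes – Python, with illustrative numbers, not the paper’s actual data. A modest, roughly Gaussian uncertainty in feedback strength f turns, via S = S0/(1 − f), into a long-tailed uncertainty in sensitivity S, and yet some probability statements about S stay perfectly stable:

```python
import random

S0 = 1.2                     # reference sensitivity (illustrative)
f_mean, f_sd = 0.65, 0.13    # illustrative feedback distribution

samples = []
for _ in range(100_000):
    f = random.gauss(f_mean, f_sd)
    if f < 1:                # f >= 1 would be runaway feedback; discard
        samples.append(S0 / (1 - f))

samples.sort()
n = len(samples)
print("median S: ", round(samples[n // 2], 2))
print("95th pct: ", round(samples[int(n * 0.95)], 2))  # huge, badly constrained
print("P(S > 2): ", round(sum(s > 2 for s in samples) / n, 2))  # stable
```

The upper tail of S blows up, but a conclusion like “S is very likely above 2” barely moves. Uncertain is not the same as unknowable – for climate sensitivity or for loss exposure.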

Best Security Analogy/Description Yet

***Update*** More on this meme from Gunnar Peterson here.  He’s suggesting a “realignment” which, to me, sounds similar to Hoff’s “order or strategy”.

From Chris Hoff – in comments on Shurdlu’s Layer 8 blog:

…there is no “thing” (read: silver bullet) that the “market” will go for. There’s lots of silver buckshot but it’s applied without order or strategy.

So my questions for you, Internet friends:

  1. What’s that order or strategy worth?
  2. If it could be quantified (you know, metrics), how much would it be worth then?
  3. If quantified order or strategy is obtainable but complex, are you willing to accept that cost?
  4. What if it involves challenging your pre-conceived notions of how the security world works?

I ask because, let’s face it, if this order and strategy exists, it’s not going to be simple, like a mash-up between ALE and a vulnerability assessment. If it were simple, it would feel like it’s been right in front of our noses, but maybe disguised by our own preconceived notions. Note also that this complexity would almost have to be strangely similar to, but substantially different from, what we do now. I would expect changing to be difficult; I would expect it to involve everything we do, and to do it in a manner that is, at first, foreign to us. We would, in a phrase, have to count the costs…

I HAVE QUESTIONS, DO YOU HAVE ANSWERS?

So what are your answers? What is order and strategy worth? Would you really be able to change the way you think, the way you work?

Some Notes This Monday AM

Hope you had a good weekend!

RiskAnalys.is – Now With Special iPhone Friendliness! 

First, for those of you with an iPhone, RiskAnalys.is is now using the crescent fresh theme/plugin for WordPress, iWPhone.  So if you come and visit us with your iPhone, you’ll see a specially formatted version of the weblog.

Consumer Credit, Fraud, and That One Dude’s Social Security Number

Brent Huston has a post up over at his StateOfSecurity blog that does an excellent job of explaining some of the finer points of identity theft protection services.  I speak for many when I say that I wish Brent could find more time to blog.

Layer 8’s Post is Good This AM

Go check it out if you haven’t already.

Congratulations To Miki

Friend ‘o’ The Blog Miki Calero has just taken a CISO position in local government.  Pretty cool…

Scraping and Scratching

TS/SCI has a fun article on scraping the web for fun and profit.

Thank You For Not Calling It “Risk”

DannyL over at the Treasury Institute’s PCI DSS blog talks about threat/vulnerability pairing for PCI. We here at RiskAnalys.is want to publicly thank him for not calling it “risk assessment” or part of a “risk assessment”.

Now if we can just get the Payment Card Industry to think about frequency…

Suspicious Minds (We’re Caught In A Trap)

My friend Mogull seems to have the blues. Hoff and Shurdlu give us their opinions. As for me, I tend to agree more with Shurdlu than Mogull. Imagine if IT were unionized, and the union said that only CISSPs or security professionals were allowed to touch the union’s list of security software and equipment.

In the short term, there would be hell to pay. In the long term, well, businesses and vendors might actually be forced to do the things we’ve been asking them to (Chris Hoff stars in JerichoWorld – the InfoSec adaptation of Kevin Costner’s Waterworld!).

A POTENTIAL SOURCE OF THEIR FRUSTRATION?

That last bit led me to think about something nobody’s added to the discussion. You know why I think they’re so frustrated? Why Lindstrom famously wrote “Security’s over and we’re all going to jail”? Because we think we should “own” the risk tolerance of the organization.

WHEN ELVIS MEETS THE DALAI LAMA

We tend to measure ourselves from our reality and perspective, but if you’ll allow me:

  1. Suffering exists
  2. Suffering arises from attachment to our risk perspective
  3. Suffering ceases when attachment to our risk perspective ceases
  4. Freedom from suffering is possible by practicing risk management

SECURITY IS AN ILLUSION, A DREAM – OK, WHAT?

There is no security. In his presentation on Metasploit at yesterday’s ISSA meeting, Aaron Bedra asked the question, “What is ‘secure’?” The audience was silent, so I piped up, “There is no ‘secure’.” There is only the act of securing. Too many times we forget that. We forget it when we write or read articles like this one. Bottom line: measuring yourself and your influence by the yardstick of “secure” is a trap you will never escape from (cue Admiral Ackbar).

ENTER RISK (WHAT DO YOU EXPECT, THIS *IS* A RISK BLOG)

All we can do is implement the risk tolerance of our organization. That risk tolerance is quantitatively expressed by our budget and qualitatively expressed by the political viability we have among other silos in the organization. This risk tolerance will never be equal to your desired state of “secure”, nor will it include your desired state of influence. So the sooner you detach yourself from your own risk tolerance and accept the one you are given, the better.

The good news is that we can influence that risk tolerance with good risk analysis and risk management. You’ll be more effective; you’ll be able to express achievement and current state, and show how that matches the desired state (or suggest what desired states might be) for the organization. The bad news is that the change usually comes in small doses over time.

So maybe the path to enlightenment lies in a transformation from Information Security to Information Risk Management.

Don’t get too down on yourself – here’s a YouTube video to cheer yourself up with:

Antons – Maybe You Don’t See The Need Because You’re Looking At Something Else?

A post from Anton Aylward, called “Why I don’t see the need for elaborate Risk Analysis,” came to me via Anton Chuvakin.

“Standards” like ISO-17799/27001 and ITIL aren’t trying to do anything more than lead people through a process to make them deal with the basic good practices. When they talk of things like Risk Analysis they are trying to get people to think about risk and their risk posture, and that is, all too often, sadly, something most firms don’t seem to have got around to.

And then Anton basically offers that until you can do the “baseline” of “good” practices, don’t bother with “esoteric” risk analysis.

Some things that jump out at me:

First, can we stop pretending to be more intellectually honest by using “good” instead of “best” practices?

Despite the seeming rhetoric to the contrary, a list of universally accepted “good” or “best” practices doesn’t exist. At best, practitioners use them the way Justice Potter Stewart used “pornography” – “I can’t tell you what the best practices are, but I know one when I see one.” At worst, they are created on the fly to justify an ad-hoc risk analysis (playing cyber-cop): “best practices say you can’t do that, neener, neener, neener.”

As this blog has mentioned before, the entire concept of “good practice” is simply a lazy man’s risk analysis. Inasmuch as it would seem good and professional to the reader to do as Donn Parker suggests and hope that we could be standardized like accounting principles, the reality is that security is far too dynamic for that analogy to work (which is why I always find it odd that good practices are offered as a remedy by those who would suggest that risk analysis doesn’t work because attackers are “asymmetric”. I’m not sure this asymmetry is relevant to the study of risk, but we’ll talk about that some other day).

So “good” practices aren’t really “good” at all. If you want to be really honest, then call them “lazy” practices. I would argue that the concept of “X practices” is more esoteric and nebulous to our specific realities than our ability to account for uncertainty in the metrics we have for the factors that make up risk.

Second, Risk Analysis (or Risk Assessment or Risk Management for that matter) Is Not Vulnerability Management

This is a nitpick of mine, but thinking that risk belongs only where ISO 27001 says it does is silly. Using risk only where the ISO tells you to is even sillier. It’s simply an immature view of risk’s relevance to the Security Program – one that this blog has suggested is a relic of our myopic focus on vulnerability assessment. Risk Analysis/Assessment/Management is not tacking some “ease of exploit combined with loosey-goosey BIA data” onto a vulnerability management cycle. In fact, they are three different things that are not to be confused. If you’re doing traditional threat/vulnerability pairing, and/or not using frequency, you’re not doing risk analysis. If you’re not judging the maturity of processes and the capability of process actors, you’re not doing risk management. If you are looking at discrete assets and ignoring the interrelated nature of networks, you need to find a better way to assess risk.

As the kids say these days, “You’re Doing it Wrong”. OK, actually, we’re (mostly) doing it wrong.

Third, Anton Is Asking For Risk Management

That last bit above about understanding maturity and capability is really quite important. What Anton Aylward is saying is that the maturity of your organization matters. Yes, yes it does. Just much more so than I think Anton realizes.

Too many times I see Risk Analysis confused with Risk Management. Too many times I see Risk Management confused with discrete risk-issue analysis (ahem, ISO – I’m looking at you). The “What Is Risk Management” question is too large for this post, or even a blog post, but your capability to manage risk encompasses an understanding of all the interrelated factors (not FAIR factors, kids) of program management. Among those factors are the maturity of the process – “Do we understand that which the business is supposed to be doing?” – and the maturity of our capability to perform that process – “Do we have any clue as to how to manage (skills, resources) our part of that process?”

Anton’s assertion is that if our gut tells us our capability to manage risk is really poor, a risk analysis is superfluous. I would back up and offer that if we have poor risk management, then risk analysis is necessary in order to find out where the uncertainties lie, and what should then be done in order to manage risk properly. Here’s the rub – if you’re doing “best” practices, you’re really just using someone else’s risk analysis, but you have no idea whether it is relevant to you or not.

Finally, I’m against Elaborate Risk Assessment

If it means the every-18-months super-Risk Assessment.  Good analysis should be done several times a day.  A good framework for analysis will change the way the professional approaches their job.  It trades the cyber-cop and the law of best practice for a scientific approach.  Risk analysis must be done well to be useful, and it should be done frequently.  We’re doing it wrong.

Kaspersky’s Viral Videos

The Security Mendoza Line – Metasploit

When we think about risk, one of the key concepts we like to understand is the strength of our controls. If you think about it, we spend a lot of time gathering information about patch levels, vulnerability scans, and audits of control functions. If there’s one thing our profession is good at, it’s trying to understand the weaknesses in our systems.

Thing is, in order to really understand risk, we need to compare the strength of our controls to the level of force an attacker will apply to them. To this end, in FAIR, we define Control Strength this way:

The strength of any preventative control has to be measured against a baseline level of force.

One of the fun discussions around Control Strength asks the question, “what is that baseline?”

ON THREAT CAPABILITY, CONTROL STRENGTH, GAUSSIAN DISTRIBUTIONS, AND LIGHT-HITTING SHORTSTOPS

It works well to think of both Control Strength and Threat Capability as population distributions. Somewhere out there is a population of threat agents. They have some level of skills and resources; some are better than others, some are worse. Because we have no evidence to the contrary, it is “good statistical practice” to use a normal distribution to represent the number of threat agents and their capabilities (see Jaynes/Bretthorst, Probability Theory: The Logic of Science).

Similarly, the same applies to the strength of our controls: Control Strength can be represented by a normal distribution. The functions of our controls (Prevent, Detect, Respond) and our capability to successfully manage those controls are represented in this measured estimation.
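As a minimal sketch of that comparison – my illustration in Python, not FAIR’s canonical math, and the 0–100 scales are invented – draw both populations from normal distributions and ask how often the applied force exceeds the resistance:

```python
import random

def p_compromise(tcap_mean, tcap_sd, cs_mean, cs_sd, trials=100_000):
    """Estimate P(threat capability > control strength) by sampling both
    populations, each modeled as a normal distribution on a 0-100 scale."""
    wins = sum(random.gauss(tcap_mean, tcap_sd) > random.gauss(cs_mean, cs_sd)
               for _ in range(trials))
    return wins / trials

# Invented numbers: the same control (mean 60) against two threat communities.
print("vs. amateurs:     ", p_compromise(40, 10, 60, 10))  # roughly 0.08
print("vs. professionals:", p_compromise(70, 10, 60, 10))  # roughly 0.76
```

The same control looks very different against the two populations, which is exactly why the Professional/Amateur divide below matters.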

Now when we go to compare Threat Capability with our ability to resist the force applied by that threat, we must take into account the category of threat we’re measuring against. There are nine major categories of threats, but most of the time we worry about threats that are Technical in nature, and we’re generally worried about threats from outside our perimeter of trust. We call these Threat Communities the External Technical Professional and the External Technical Amateur. Using the qualitative labels “Professional” vs. “Amateur” creates a nice semantic divide for the analyst. Many times, thinking of the difference between the two can help the architect, log/event analyst, or risk analyst filter the information they need to process for relevancy.

Wouldn’t it be nice if we had something that helped us divide who we considered “Amateur” and who we considered “Professional”?

Mario Mendoza played shortstop for the Pirates, Mariners, and Rangers about the time I first really got into baseball (mid-’70s). He was a very good defensive player – he had an adept throwing arm, excellent range, and a very good glove. Unfortunately for Mendoza, he was as bad on offense as he was good on defense. He struggled his entire career to hit .200 (one base hit every five at-bats). This prompted his teammates to declare a .200 batting average “The Mendoza Line” – suggesting that .200 was the minimum amount of offense a player could provide to justify his place in the lineup. Hit below .200, and, well, you had better get back up above the Mendoza Line or face demotion. The ability to hit .200 separated the professionals from the wannabes.

When thinking about the technical controls I have in place, I use a Mendoza Line mentality of my own. That comes to us thanks to H.D. Moore – a founder of both DigitalDefense and DigitalOffense, and the principal figure behind the Metasploit framework. Metasploit is easily available, easy to use (click-’n’-drool), and has a significant amount of “brand recognition”. These three factors alone make it useful as our “Mendoza Line” between Professional and Amateur, and as a yardstick when thinking about the strength of our controls.

Now, I’m using Metasploit as one example; there are other tools for other uses that can be thought of in the same manner. The point is, the amateurs can be defined as those whose competency stops at what Metasploit (or some other “Mendoza Line” tool) can do for them, and the professionals as those whose expertise extends beyond what is commonly available in a format that needs only a modicum of UNIX experience to use.

So when considering the strength of your controls and the attackers you wish to study, consider your baseline measurement of force here.  It may not be too hard to find a point of reference to use.

iPhone Hacking For The Masses, Update

Jailbroken, according to TUAW.

New iPhone Exploits, or Hacking Becomes a Pastime for the Masses

I’ve posted my thoughts on the iPhone on my personal blog because, for the most part, the past discussions have had little to do with information risk or security.  And while this morning brings news that new exploits have been discovered, I think what’s very interesting is that, thanks to blogs and the popularity of the iPhone, we’re starting to see platform hacking become a spectator sport for the masses.

To be sure, hacking as a spectator sport has existed for years now, but this seems to be the first time I can recall that the audience has gone beyond video game users (the old Xbox opening) or deeply geeky folks (putting Linux on this or that).  I imagine that iPhone hacking is now common conversation in churches, coffee shops, and around the water cooler.

IMHO this is good.  John and Suzy Q. Public may not understand exactly what a buffer overflow is after the iPhone hacking hits TUAW or CNN, but they’ll know such things exist.

Good luck, Erica Sadun and others – I’ll be rooting for you to at least raise awareness.

(image from sci-fi.com)