Chris, who works in Government C&A, has a blog with a wonderful title: How is that Assurance Evidence?
I’d love to have another blog even more specific – “Ok, that Assurance is Evidence Of What, Exactly?”
Today he has a great article called:
And “in short, it’s everything.” That pretty much sums up why I had to re-evaluate how our industry does risk, does risk management, approaches controls & vulnerability – and why I had to find a new way. A couple of things jump out at me in reading Chris’ article:
1.) Just because that Deming cycle sucks and is full of unknowns doesn’t mean “risk” doesn’t exist, nor that it isn’t of primary importance. Nor does it mean that in the absence of model & methodology, we won’t be “doing” risk analysis anyway – just in an ad hoc method and completely from “the gut”.
Our industry calls this unstructured risk analysis “Best Practices”, as it’s an easy and convenient way of sweeping the unknowns under the rug of bureaucracy and enforcing them via peer pressure.
2.) What this “suckiness” does mean is that your model and methodology aren’t helping you. As Chris intimates, there is too much uncertainty in the inputs for his model (they are, in the language of Bayesians – too subjective to be useful priors).
Take for example how we might be approaching the “controls” part of our analysis. Chris writes:
“2. What are the controls that we have to employ?
800-53, ISO 27001, PCI, etc.
Still kinda good, but we basically know that ISO is relatively voluntary and NIST supplies a control catalog and not policies. So here we have to take the control catalog, and mash our policies into it.”
I wouldn’t call this “kinda good” at all. These control catalogs only provide a hierarchy within which to look for evidence of our ability to resist an attacker. They are incapable of making any claim about the effectiveness of the controls when they are operated at 100% efficiency, or more importantly, about what % efficiency our specific organization actually operates at.
Let’s use Chris Hayes’ Initech as our fictional example.
Initech has a control (a back door on a loading dock). Now the locks on the door are 100% capable of locking the door. This is different from saying that they are capable of frustrating all but the top 5% of lockpicking burglars. It is also different from saying that in a sample of several “walk around audits” the doors are left open 20% of the time (they are not in compliance with policy 100% of the time). Even worse, during the 80% of the time the door is not propped open? Yeah, tailgating is a known issue.
So we have several different variables here that we need to account for (and it’s just a door). But the analogy stands that most “risk management” methodologies are “We have a door, yes/no?” And most GRC platforms, when asked for their “opinion” will simply say “door is needed” or, even worse, “a door policy is needed”.
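To see how those several variables stack up, here is a toy probability sketch of that one door. Every number below is an illustrative assumption drawn from the example (the 20% propped-open rate, the top-5% lockpicker threshold) plus a made-up tailgating rate – none of it is real audit data:

```python
# Toy model of a single physical control (a locked loading-dock door).
# All probabilities are illustrative assumptions, not real audit data.

p_propped_open = 0.20   # walk-around audits: door found open 20% of the time
p_lock_defeated = 0.05  # lock frustrates all but the top 5% of lockpickers
p_tailgate = 0.10       # assumed chance an attacker tailgates a closed door

# An attacker gets through if the door is open, OR if the door is closed
# and they either pick the lock or tailgate through it.
p_entry_closed = p_lock_defeated + p_tailgate - p_lock_defeated * p_tailgate
p_control_fails = p_propped_open + (1 - p_propped_open) * p_entry_closed

print(f"P(attacker gets through the door) = {p_control_fails:.3f}")  # 0.316
```

Even with made-up numbers, the point survives: the “is there a door, yes/no?” checkbox collapses at least three distinct failure modes into one meaningless bit.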
3.) Criticality and the Source of Value are all messed up in these Risk Management models.
Someone wants me to tell them which boxes are more critical than others. This is mainly because of budgetary or operational reasons. To which I usually say “All of them, it is a system after all”.
This literally made me laugh out loud. And this sort of “rate the firewall as Risk = 500 but rate the actual business application as Risk = 157” thing is also endemic. Now Chris is very smart here. He correctly identifies that the value is tied to the business process the systems support, and not to a specific box. Oh, we scan at the specific-box level – but because of the nature of systemic failures, all the boxes in the process are inextricably interrelated.
One of the reasons I really like FAIR is that the losses are quantified (or qualified) based not on some amorphous value of the box or the process itself, but are linked to the actions that the threat will take. Take systems in highly regulated industries as an example. Usually the most probable losses aren’t due to the system compromise per se, but to the disclosure the compromise causes (regulators are a threat source, after all). But many “risk management” methodologies will say “online banking is worth $2 billion, the value of the systems is therefore $2 billion”. And suddenly we’re telling executive management that there’s a 60% probability that they’ll lose $2 billion.
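The difference can be sketched in a few lines of Monte Carlo. This is not FAIR itself, just a minimal FAIR-flavored illustration: loss events (a disclosure triggering fines and response costs) occur with some frequency, and we simulate annualized exposure from those events rather than from the “$2 billion value of the box”. Every distribution and number here is an invented assumption:

```python
import random

# Hypothetical FAIR-flavored sketch: losses attach to threat actions
# (a disclosure event triggering regulatory fines and response costs),
# not to the notional "value" of the system. All numbers are made up.

random.seed(42)

def simulate_annual_loss(trials=100_000):
    total = 0.0
    for _ in range(trials):
        # Loss Event Frequency: assume ~5% chance of one disclosure per year.
        if random.random() < 0.05:
            # Loss Magnitude: fine + incident-response cost, per event.
            fine = random.triangular(100_000, 5_000_000, 1_000_000)
            response = random.triangular(50_000, 500_000, 150_000)
            total += fine + response
    return total / trials

print(f"Simulated annualized loss exposure: ${simulate_annual_loss():,.0f}")
```

The simulated exposure comes out on the order of $100K a year – a defensible number to put in front of management, instead of “60% probability of losing $2 billion”.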
4.) If the primary source of prior information for your “risk management” methodology is a vulnerability scanner – you’re doing it wrong. Chris writes:
So we ran a scan and now we have a report. A snapshot in time to make all decisions. Where did these vulnerability ratings come from? Do I even know if my system is at risk? What if I spend my time on vulnerabilities that have no threat?
So first, my thoughts are that actual “vulnerability” must be a comparison of the force a threat can apply with our ability to resist that force (this is a probability statement, btw).
Changing your thinking about vulnerability helps you understand the problem in several new ways. For one, you can start to divorce yourself from the scanner. After all, the scanner is simply providing you with current-state information that is usually just relevant variance from policy. It doesn’t really tell you about real “weakness in a system”, because the system is an interrelated mess of people, processes and IT assets.
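That probability statement can be made concrete with a small simulation: draw a threat capability and a resistance strength from two distributions, and vulnerability is simply the probability that the threat’s force exceeds our resistance. The distributions below are illustrative assumptions, not measurements:

```python
import random

# Sketch of "vulnerability as a probability": compare the force a threat
# can apply (threat capability) against our ability to resist that force
# (resistance strength). Both distributions are invented assumptions.

random.seed(7)

def vulnerability(trials=100_000):
    hits = 0
    for _ in range(trials):
        threat_capability = random.gauss(50, 15)  # skill across the attacker population
        resistance = random.gauss(60, 10)         # strength of our control, with variance
        if threat_capability > resistance:
            hits += 1
    return hits / trials

print(f"Vulnerability = {vulnerability():.1%}")  # roughly 29%
```

Notice what the scanner contributes here: at best, one input (evidence about our resistance strength). It says nothing about the threat side of the comparison.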
5.) Finally, most “risk management” approaches just *don’t* do a good job of helping us understand the hows and whys of managing risk. In the past, I’ve referred to these standards as really being “issue management” because they are, at their heart, an act of discovery – a formal process around gathering prior information. They are not, in and of themselves, capable of linking the issues discovered to the root cause. And these root causes? Yeah, they’re the things that create “risk”. Not a threat, not a vulnerability, not the existence of an asset – the amount of risk that we have stems from our capability to manage it.
So Chris, I completely agree – but I wouldn’t give up yet. There actually are a few of us who are focused on what you suggest:
Where to go from here: A fundamental revamp of how to deal with Risk. Where risk professionals focus on treating the sickness and not the symptoms, and come up with some new success/actionable metrics.
Chris, there’s nothing I want to do more than that.