“Security requires a particular mindset. Security professionals — at least the good ones — see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.”
- Bruce Schneier
For me, acquiring a “security” mindset wasn’t tough. I was lucky enough to work with some great penetration testers. The whole “social engineering” thing was easy to “get”, too. By my second engagement, I had developed a love for figuring out how to manipulate the much-maligned bureaucracy.
The problem with the security mindset is that, in risk analysis, it carries over as a bias. When I’m out training organizations, there’s usually a really smart guy with years of cybercop experience who will devolve the conversation about Vulnerability (Threat Capability vs. our Controls) into how he would use his knowledge of the systems and their weaknesses to steal millions and millions of dollars/identities/trade secrets/whatever in a particularly clever way. It happens every session. It’s not a bad thing – but it has to be qualified within the context of the applicable threat community. Are we really worried about an uber-brilliant admin with 20 years at the company and intimate knowledge of the system architecture as a threat community? Maybe we are, and if so this is a great and relevant discussion.
But if we’re not able to throw the resources at the problem needed to address someone whose skills and resources are in the top 1/10 of 1% of the threat community, what we’ve done is go down a rabbit trail: *if* an attacker had near-perfect knowledge of the system and its defenses, it would be possible to evade prevention, detection, and most likely response until it was too late. Great, but there’s a bias we’re carrying into the discussion because of the security mindset.
The thing is, once the security mindset matures with experience, we *know* that it is possible for any system, regardless of physical location or the vendors that supply its software, to be compromised. The question the risk analyst must answer, however, is really “What is *probable*?”. And we should really belabor the point that “What is probable?” is not just a “Can it be done?” question. Yes, Level of Effort or Skills & Resources are relevant pieces of prior information, but what is similarly (if not more) important is the concept of frequency of events – or “*Is* it being done, or is it likely to be done in the future, and at what rate?”
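To make the difference concrete, here’s a minimal Monte Carlo sketch in the spirit of FAIR’s decomposition of Loss Event Frequency into Threat Event Frequency times Vulnerability (the probability that Threat Capability beats our controls). Every number in it (the distributions, the 0-100 capability scale, the attempt rates) is an illustrative assumption, not a calibrated estimate:

```python
import random

# FAIR-style Monte Carlo sketch: Loss Event Frequency (LEF) =
# Threat Event Frequency (TEF) x Vulnerability, where Vulnerability
# is the probability that Threat Capability (TCap) exceeds
# Resistance Strength (RS, i.e. our controls).
# All distributions and parameter values below are assumptions.

TRIALS = 100_000

def simulate_lef(tef_low, tef_high, tcap_mean, rs_mean, spread=15):
    """Mean loss events per year across TRIALS simulated years."""
    total = 0.0
    for _ in range(TRIALS):
        tef = random.uniform(tef_low, tef_high)   # attempts/year (assumed uniform)
        tcap = random.gauss(tcap_mean, spread)    # attacker skill, 0-100 scale (assumed normal)
        rs = random.gauss(rs_mean, spread)        # control strength, same scale
        if tcap > rs:                             # the attempt becomes a loss event
            total += tef
    return total / TRIALS

# The uber-brilliant insider: near-certain to succeed when he tries,
# but maybe one attempt a decade.
insider_lef = simulate_lef(tef_low=0.0, tef_high=0.2, tcap_mean=95, rs_mean=60)

# Commodity external attackers: far weaker per attempt, but knocking
# on the door constantly.
commodity_lef = simulate_lef(tef_low=50, tef_high=200, tcap_mean=45, rs_mean=60)

print(f"Insider LEF:   ~{insider_lef:.2f} loss events/year")
print(f"Commodity LEF: ~{commodity_lef:.1f} loss events/year")
```

On these made-up numbers, the insider nearly always succeeds *when he tries*, yet the commodity attackers generate far more loss events per year. That’s the frequency point.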
EXAMPLE OF THE DIFFERENCE
There should probably be a Godwin-esque law about 9/11 examples in security by now, but you’ll forgive the indulgence. Post 9/11, we had all sorts of questions about the risk attackers posed to national infrastructure. And the reason isn’t that we couldn’t imagine all sorts of creative attacks against nuclear power plants, metropolitan water supplies, or high-visibility entertainment venues. Our uncertainty was due to a perceived possibility of an increase in frequency. They did something spectacular once; (when) will they do it again?
This should be the mindset of the risk analyst. Understand that it can be done, and how it may be accomplished, to be sure. But it’s imperative that we frame that knowledge within the context of frequency and impact considerations.
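Continuing the sketch above (and reusing its simulated frequencies), we can weight each threat community’s frequency by an assumed loss magnitude to get an annualized exposure. The per-event losses here are, again, pure assumptions for illustration:

```python
# Continuing the sketch above: weight frequency by impact to get an
# annualized exposure. Per-event loss magnitudes are assumed values.
insider_loss_per_event = 5_000_000    # rare but spectacular (assumed)
commodity_loss_per_event = 20_000     # frequent but modest (assumed)

print(f"Insider exposure:   ~${insider_lef * insider_loss_per_event:,.0f}/year")
print(f"Commodity exposure: ~${commodity_lef * commodity_loss_per_event:,.0f}/year")
```

Framed this way, the rare-but-spectacular scenario and the constant background noise land on the same scale, which is what the analyst actually needs to compare them.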
For me, the good news is that mindsets don’t seem to be fixed. Training analysts in FAIR has shown me that these mindsets can be learned and unlearned. In fact, I’m starting to think that one sign of IQ/EQ/whatever might be the speed with which one can adopt other mindsets.