Another Really Lousy Security Analogy

(NOTE: content may be light this week as I’m performing an audit based on BITS Agreed Upon Procedures.)

I was thinking of another reason why checklist based approaches fail, and I came up with yet another lame security analogy. I thought I’d force it on you. In thinking about this audit, I realized that data is to an organization (especially an F.I.) what syrup is to my 3-year-old eating pancakes.

(not my kid, but flickr user Big DC’s)

Without adult supervision, my boy puts an enormous amount of syrup on his pancakes. You might as well take the little flow-limiting mechanism on the top of the squeeze bottle right off and let half a bottle of maple goodness douse the plate. Now the consequences of this much syrup are twofold:

  1. It tends to make the child hyper. The sugar high must be enormous. The child is bouncing off furniture like a pinball, chattering away at about 10,000 words per minute, reciting the Wiggles and Tolkien with equal ease, giggling uncontrollably one minute, screaming at the top of his lungs the next. On top of it all, his tactile senses are in overload. He’s touching everything in sight — which complicates the next part:
  2. He’s sticky. Very, very sticky (see the young lady, above). And thus everything he touches is sticky. A week later I’m wondering why there’s this patch of rug fuzz on the TV remote, why the “T” key on the boy’s iMac is stuck in the down position, why there are sticky brown patches on my stupidly expensive camel’s hair overcoat, why the cat hasn’t moved from one spot in about seven days…

Data, for financial institutions, is the same way. The abundance of data, and the new, cheap and easy access to it, makes business development ecstatic. They’re offering access to this, they’re crunching those numbers, they’re doing new, wonderful and creative things with your and my personally identifiable information. They’re bouncing off the cubicle walls.

But while we security professionals are stuck managing the child on the sugar high, everything is becoming sticky. Sticky with data. It’s on servers and desktops. It’s over wireless and wires. It’s near and far — crackberries and laptops, off to business partners and used test environments. Reams of the stuff are being printed, tons of it being talked about, and, like my furniture, toothbrush and wife’s hairbrush, we find out at inopportune times that sticky stuff is definitely where it shouldn’t be.

How does this pertain to checklist approaches like the BITS Agreed Upon Procedures (AUP) (or ISO, or whatever)? Easy. Checklists don’t take into account the “stickiness” of data. To pick on the AUP specifically: there’s nothing about not using live data in test environments, or about figuring out how and when data touches “portable electronic devices” and how those should be locked down. Maybe those things will be covered in a future version, but for now they’re missing. It’s not as if laptop theft issues aren’t well known.

Not to mention that there’s no “risk driver” for the AUP. It’s binary: a control, say a “vibration alarm sensor in the Secure Perimeter,” is either there or it isn’t. It doesn’t matter whether risk analysis says it’s effective or not: if you aren’t protecting your Minnesota bank from earthquakes, you get a frowny-face sticker on your AUP audit — and that goes off to whomever the audience of the resulting document is.

The bottom line is that as long as checklists and risk assessments focus on assets and best practices and not business processes and data, we’re going to have these problems. We’ll clean the kitchen floor, only to find sticky syrup in places we least expect.

Laws of Simplicity

Non-risk related post, but a concept to ponder in our approach to risk, value, metric development and reporting.
Found via Presentation Zen:  The Laws of Simplicity book and weblog.
Worth checking into.

IP address/Geographic Locator

Finally, a Web 2.0 Mashup I can use!  Enter an IP and it shows you a Google map location for the IP. 

Metrics. Durned Metrics and Statistics.

In light of the online discussions surrounding metrics (and here and here and here), I’d like to bring us back around to the “why” question.

Why should we be striving to build great metrics?

Please don’t surf away just yet — the answer to this question may not be as self-evident as you think.

Over the past year or so of following weblogs I’ve read articles that span the gamut from InfoSec-doesn’t-get-enough-credit to wondering if InfoSec, as a stand-alone function, will continue to exist in the future. I’ve seen the worth and value of various controls debated. I’ve read quite a bit about people building metrics that measure this and that. I’ve seen frustration as bloggers find out about incident after incident that wouldn’t have happened if the vendor had simply implemented the most basic of controls.

When I think about these things this morning, and look at the “metrics subject” and think really hard about what I could write that might be of help to us — I immediately turn to one of Jack Jones’ most recent presentations at CSI: “Value? What Value?” I’ll try to see if I can make the keynote/.pdf available to everyone, but his summary slide is very poignant. The first bullet:

It’s not our perspective/beliefs that matter — value is in the eye of management.

This statement is fundamental to our understanding of who we are. Saying things like “we should just expect/teach management to respect our craft and best practices and ignore risk” is therefore foolishly naive.

Now, Jack moves on after that bullet with this one:

Work with management on defining what matters and which metrics are useful.

Huh. Go figure. Give your boss some input on what matters. Herein lies the basis for metric development: to increase our perceived value, and to express that value through measurement.

In his brilliant reductionist manner, Jack cuts through the fog and uses his next bullets to clearly point us in the direction of where we can develop metrics:

Infosec’s value opportunities are…

  • Reducing loss
    • Measured through incident statistics
  • Reducing risk
    • Measured through assessments and improvement objectives
  • Improving risk management capabilities
    • Measured through project and operational statistics
    • Illustrated/described in risk analysis quality improvements

I think Jack’s stated the reason for metrics succinctly. As I look at what he’s come up with there, I don’t know that I can add to it or remove from it.

Concerning ROI

I’ve said many times before that ROI is tricky waters. As I review the value opportunities above, I can start to see where we can express the usefulness of past investment to an organization by utilizing loss reduction, risk reduction and the creation of capability/operational efficiencies. Is that true ROI? I don’t know yet. But it’s worth thinking about.


FAIR Critique?

Kiwi Blogger (did I ever tell you how I’d love to be a Kiwi?) NoticeBored Critiques FAIR.

I responded!

Inefficiencies, Politics, SOX Risk and WHOOPS!!!

Around the Security Blogs in 30 minutes, I’m Alex Hutton.

Inefficiencies in Security/Risk Management

Dark Reading has an article on email surveillance. Here’s a performance metric for ya’:

According to the survey, respondents are spending a median of 12 hours per week for every 100 employees to review 10 percent of their emails.

Wow. It would be worth running a FAIR analysis on data leakage via email to see if it justifies that level of effort. I’m going to guess the answer is No.
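Before even getting to a full FAIR analysis, a back-of-the-envelope calculation shows the scale of that effort. The survey only gives the hours figure; the headcount and loaded hourly cost below are my own illustrative assumptions, not from the survey:

```python
# Back-of-the-envelope annual labor cost of the manual email review
# described in the survey: 12 hours/week per 100 employees.
# The hourly cost, headcount, and work-weeks are assumed for illustration.

def annual_review_cost(employees, hours_per_week_per_100=12.0,
                       loaded_hourly_cost=75.0, weeks_per_year=50):
    """Estimate yearly labor cost of manual email review."""
    weekly_hours = hours_per_week_per_100 * (employees / 100.0)
    return weekly_hours * loaded_hourly_cost * weeks_per_year

# A hypothetical 5,000-employee shop: 600 review-hours every week.
cost = annual_review_cost(employees=5000)
print(f"${cost:,.0f} per year")  # → $2,250,000 per year
```

Any loss-event frequency and magnitude you plug into an email-leakage model has to beat a number like that before the surveillance program pays for itself.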

Speaking of inefficiencies and metrics, Amrit Williams (good blog, I subscribe) and Mike Rothman (one of my absolute favorites) are talking metrics. Well, they’re talking performance metrics, mostly, which is only one useful category of metrics. I think the most interesting part of the discussion is that Amrit is proposing a Security SLA. Wow. Now in the linked post, he doesn’t get too specific about what exactly that SLA would be, but he’s going to put out a white paper eventually. A Security SLA… maybe it’s just me and what I know of MSSPs, but that’s Texas-Sized (if you’re not familiar, “Texas-Sized” is a phrase suggesting one has a large amount of, um, bravado).


NoticeBored (a kiwi security blogger) has some of that goofy “you can only leave comments if you have a blogger account” goin’ on, so rather than leave a comment there about his post on authentication, I’ll mention it here. It’s been my experience with banks and credit unions that no one but the most naively optimistic vendor believed that “regulations were anticipated to force US banks into using tokens for user authentication by the end of this year.” Regulators/Examiners and F.I.’s perform a somewhat beautiful dance when these things come out. Rarely in the past ten years have I seen either party be unreasonable. I also would offer that anyone with any experience dealing with US banks and credit unions (esp. cu’s) would say they are NOT against sharing control solutions. In fact, that sort of information sharing is pretty rampant, and I think is working pretty well.

SOX – Not Just About Wasting Money on Japanese Starting Pitchers!

This week the Boston Red Sox spent $51 million just for the right to talk about a contract with a 26-year-old Japanese phenom with too many innings under his belt….


Numerous public companies spent similarly silly amounts on SOX section 404.

Big4Guy (a good IT and SOX blog) has an article on “How to Link Financial Statement Account Assertions and Risks.” We get a lot of Internet travelers to this site looking for SOX risk information. All I can tell you right now is that from what I’ve seen — IT “Risk” expression in SOX terms is actually what you and I would call “threats.” Unsurprisingly (given what SOX 404 actually says) there’s a ton of uncertainty, unsophisticated auditors, and general confusion. This is too bad.

Talk about the government creating market inefficiencies…

Which is More Frightening

Airport Arrest Turns Up Nuclear Info

Larry King has no idea what the Internet is about

Big Wheels Keep on Turnin’!

In comments to Monday’s note from Mike (who writes for one of my favorite blogs, Episteme):

I’ve seen far too many hamster wheels that don’t even understand the concept of risk as a holistic idea.

Let me throw out an idea here — when talking about risk, can you say that the concept that some of these “hamster wheels” you’re discussing is actually “network risk management,” rather than “risk management”? I have been thinking of late that many security pros are incredibly “network-centric,” and that things like “user awareness” are soft and fuzzy and not part of the awareness of the whole system for most security pros.

Does it make sense, for the sake of thinking about products like this, to make this differentiation (since we can’t possibly talk about them in terms of holistic, all-business risk)?

Thank you Mike for serving me a softball (grin)! The first thing we have to — have to — focus on is our definition of “risk.” I think we can agree that risk is not “threat” or “vulnerability”: but using those three terms interchangeably is, for some, an old habit, and I think that’s where some of the problem comes from. Other folks use a dictionary definition for “risk” (which I can’t really call invalid, since it’s in the dictionary after all) that is synonymous with IT threat or IT vulnerability. I don’t think people using risk in this sense are being purposefully disingenuous; they just haven’t thought the “risk” concept all the way through.

To me, if we’re not talking probability, and if we’re not talking loss, then we’re not talking risk. Maybe a contributing factor to risk, but it’s not risk itself. And that’s why, while not antagonistic to the concept of “network risk management,” I would have to understand exactly what we’re talking about and what any sort of measurement of network risk management would mean to me.
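That definition (no probability, no loss, no risk) can be made concrete with a toy annualized-loss calculation. The scenario names, frequencies, and dollar figures below are invented purely for illustration:

```python
# Toy annualized-loss sketch: "risk" requires both a probability
# (how often a loss event occurs) and a magnitude (what it costs).
# All scenarios and numbers below are invented for illustration.

scenarios = {
    # name: (expected loss events per year, expected loss per event, $)
    "lost laptop with customer data": (0.5, 250_000),
    "web app compromise":             (0.1, 1_000_000),
    "backup tape lost in transit":    (0.25, 400_000),
}

for name, (frequency, magnitude) in scenarios.items():
    # Expected annual loss = frequency x magnitude.
    print(f"{name}: ${frequency * magnitude:,.0f}/year expected loss")
```

A vulnerability count or a threat feed supplies neither column of that table by itself, which is why neither is “risk” on its own.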

I have been thinking of late that many security pros are incredibly “network-centric”

This is really understandable. Most folks in operational security are immersed in IP. Actually, now that I think about it, I would say that most security pros are incredibly “asset-centric” — and have a tough time really framing the role of any particular asset in the network. Again, this is natural. Our scanners give us information on an asset, and the asset itself provides us nice, safe boundaries within which to operate. There may be a mid-layer of folks who are “OSI Model”-centric, too — but understanding the business relationships between assets escapes most of us, and unfortunately that’s how we derive risk: by understanding the role of the asset in network and business processes.

Recently, I feel like I need a big ol’ “Understand Your Business Processes” soapbox. I think that understanding this is key to understanding your risk. You may have some serious policies; you may even have a really nice diagram that shows all the security tasks you perform on a daily/weekly/monthly/quarterly schedule. These are fine, but until you have a list/map of every business process that’s dependent on networked resources — and what/where those resources are — you’ll never understand applicable threats, applicable controls and applicable loss.

Finally, Mike and everyone else, read Andrew Jaquith’s entry on the Hamster Wheel. He was one of the first “risk believers.” But now he’s ready to create his own denomination (if you will) based on metrics because so many vendors are maligning the term “risk management.” Me? I want to sit and fight for what’s right — real IT risk — and how it works. Call me a zealot, but once you understand risk (thanks, FAIR) you can’t use anything else. (Do consider this another plug for our training, by the way!)

I’ll end by saying that there’s nothing wrong with the Hamster Wheels of Vulnerability or Asset Management. These things are good things! It’s just that they aren’t Risk Management.

Good Articles to Read Today

Just an FYI: We added links up at the top menu there. Folks have asked me what blogs I read — the new page includes my OPML file for your use. It also has a link to what I’ve seen today and what I enjoyed reading.

We Don’t Have to Be Trapped In A Cage (Despite All Our Rage)!

Recently I was given the opportunity to review a couple of chapters from Andrew Jaquith’s forthcoming book on Security Metrics. For those who don’t know, Andrew is a senior analyst with the Yankee Group, formerly a co-founder and principal consultant at @stake, and devotes an innumerable amount of time and effort to helping our profession out. The chapters I read were both entertaining and thoughtful. I found myself nodding in agreement more than a few times.

As an analyst, Andrew willingly submits himself to Powerpoint Hades — a seemingly eternal punishment at the hands of vendors who want to show how wonderful and relevant their products are to IT Security and/or Risk Management. I won’t reproduce the whole excellent article, but he has a great blog post on what he calls “The Hamster Wheel of Pain.” From his weblog:

Nearly everyone shows up with a doughnut-shaped “risk management” chart whose arrows rotate clockwise in a continuous loop. Why clockwise? I have no idea, but they are always this way. The chart almost always contains variations of these four phases:

  1. Assessment (or “Detection”)
  2. Reporting
  3. Prioritization
  4. Mitigation

Of course, the product or service in question is invariably the catalytic agent that facilitates this process. Follow these four process steps using our product, and presto! you’ve got risk management. From the vendor’s point of view, the doughnut chart is great. It provides a rationale for getting customers to keep doing said process, over and over, thereby remitting maintenance monies for each trip around the wheel. I can see why these diagrams are so popular. Unfortunately, from the perspective of the buyer the circular nature of the diagram implies something rather unpleasant: lather, rinse, repeat—but without ever getting clean.

He’s even created a collection of “Hamster Wheels” for your review.

That’s as Maybe, But it’s Not Risk Management

After viewing his collection — the one he uses in his book, and even some very nice ones online that aren’t in his collection — I can tell you without hesitation that these are not Risk Management (nor, I believe, does Andrew think they are). Very nice vulnerability management cycles, maybe loss event management or asset management, but it’s not Risk Management (capital “R” capital “M”). Why? Even the most detailed Hamster Wheel is inadequate. You can’t just add “risk analysis” or “risk assessment” to a vulnerability/asset management cycle and then call it Risk Management — but that’s what most of these diagrams are doing.

Alex’s Litmus Tests for Risk Management

Next year you are going to get vendors selling boxes that do automated penetration tests, NIST 800-30 (or other) based assessments, even entire suites of programs and services claiming to be “Risk Management.” How can you tell what isn’t “Risk Management”? Let me give you three easy tests. If the proposed solution fails any one of these, you know what you’re looking at may be a piece of how we can better manage risk, but it’s not “Risk Management” per se:

Test 1: Does it leave out awareness?

This is one of the easiest ways to debunk “Risk Management” in a wheel, box, rhombus, or what have you. Risk Management includes just about everything your department does for controls — if a vendor’s wheel doesn’t include end user awareness, it’s not Risk Management.

Test 2: Is it asset or vulnerability focused?

Remember what we said about controls and risk? That controls are only one of eight factors to consider in calculating risk? The same is true for vulnerability.

Vulnerability management != risk management. As I said above, adding risk analysis to a vulnerability management process does not make it Risk Management. You cannot assign risk for a particular “vulnerability” because any one vulnerability can lead to any number of loss events and just about any amount of loss. A wheel that begins with vulnerability discovery and ends with “mitigation and re-assessment” has alarmingly little to do with risk.

In the same way, real Risk Management involves focusing on business processes, not individual assets. This is why NIST 800-30 fails at its job, and why we have such a difficult time isolating relevant threat sources (or, even worse, threat-vulnerability pairs) and judging real impact. We focus on risk for an asset, and that asset is usually only one part of a business process — one piece of the full puzzle!

This cannot be overstated: you cannot “fix” risk just by fixing vulnerabilities, and you cannot determine your risk by considering particular assets in isolation.

Test 3: Will it get me out of my Hamster Ball?

Not only do security professionals feel like they’re on a Hamster Wheel of Pain, but many times they also feel as if they’re isolated, like the hamster in the plastic ball toy. Unappreciated, under-represented — as if the rest of the company wants them to just have fun in their little plastic balls, running around the cubicle farm as long as they stay in the protective plastic bubble of firewall management and don’t cause any difficulties. Why do practitioners feel this way? Because they lack credible means to communicate their value to the rest of the organization. That’s what Risk Management does for us — it gives us value. This value is communicated in our conversations with other parts of the organization — how we approach projects, policies and processes.

So if the Hamster Wheel you’re presented with doesn’t include awareness, S, C, & A processes or policy exception handling processes, if it can’t help you decide what the right vulnerability tolerance window is, and if it doesn’t express your value to the organization — it’s not Risk Management.

On Controls

How much do our controls contribute to the risk equation?

Take a look here:

Note that at the level that we can begin to consider the impact of our controls, it’s simply 1 of 8 factors.

I mention this not to impart a lack of importance to controls, but to get you to think about risk differently.

Controls are vitally important for many reasons, but in summary:

1.) They’re the factor we have the most control over. We can change the aggregate strength of our controls; we can’t really affect loss magnitude or TEF factors directly.

2.) We cover this in RMI training (another shameless plug, I know), but controls can have an effect on other risk factors — like TEF and loss factors. All of the factors of risk are interrelated and important, but controls do have that unique capability.

3.) It’s the focus of the conflict. If Vulnerability is a ratio that describes the result of our battle with threats (TCap vs. Control Strength), then it’s this battle that determines the outcome of the “risk” war.
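That TCap-vs-Control-Strength “battle” can be sketched crudely in code: vulnerability as the fraction of simulated threat events whose capability beats our control strength. This is a loose Monte Carlo illustration under made-up uniform ranges, not the actual FAIR computation:

```python
import random

# Loose illustration of Vulnerability as the outcome of the
# TCap-vs-Control-Strength contest: the fraction of simulated threat
# events whose capability exceeds our control strength.
# The uniform ranges below are invented; FAIR itself is richer than this.

random.seed(42)  # make the simulation repeatable

def vulnerability(tcap_range, cs_range, trials=100_000):
    """Fraction of trials in which threat capability beats controls."""
    wins = sum(
        random.uniform(*tcap_range) > random.uniform(*cs_range)
        for _ in range(trials)
    )
    return wins / trials

# Same threat population, two control postures (hypothetical numbers).
weak_controls   = vulnerability(tcap_range=(40, 90), cs_range=(30, 60))
strong_controls = vulnerability(tcap_range=(40, 90), cs_range=(70, 95))

print(f"vulnerability with weak controls:   {weak_controls:.2f}")
print(f"vulnerability with strong controls: {strong_controls:.2f}")
```

Raising aggregate control strength is the one knob in that simulation we actually turn, which is exactly the point of factor 1 above.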

Fishy, Fishy, Fishy, Phish

Hey everybody. In case you missed it, Phishtank’s statistics for October (their first month) are up.

Let me be among the first to say that this is a wonderful site, and of great use to the community. These are real TEF numbers for use in your Phishing risk models, all you F.I. readers!

How cool is this site? They even have a Firefox extension.

I want to say thank you to everybody involved in Phishtank.