Fear and Loathing in OS X Security Land

“We can’t stop here! This is bat country!!”

This article from CSO magazine, “Symantec Warns of Mac Phishing Threat,” cracked me up. At first, I wondered how there could be a Phishing attack specific to OS X. After all, Symantec’s Enrique Salem, president of consumer products, is talking specifically about the OS X userbase — and at least part of the time (turning into the majority of the time), that’s me! So let’s read the first two paragraphs of the article:

There is a real danger that people think they are secure on the Mac when they aren’t, according to Enrique Salem, Symantec president of consumer products and solutions.
Salem spoke about how the creators of phishing schemes, which seek to obtain your confidential information, are becoming more sophisticated. “The attacks are much more socially engineered,” he explained. “They are trying to figure out what the user will respond to, and that means it doesn’t matter what computer you are using because whether you are on a Mac or a PC, you get e-mail.”

GAH!!!! The article is nothing but FUD. There’s no specific “Mac Phishing Threat” — that’s all a hallucination of Salem’s.

In my view, both Apple and Symantec are on opposite ends of the FUD spectrum here. Apple knows that until there are specific malware threats, they can continue to run cute advertisements claiming there are no threats to the current state of OS X. When I see these ads, I feel like Hunter S. Thompson’s Raoul Duke:

No point in mentioning these bats, I thought. Poor bastard(s) will see them soon enough.

At some point, the “bats” of malware will stop being my paranoid hallucinations, and start being real. It’s pretty much inevitable. I run MacScan and ClamX and caution other OS X users who make careless remarks about safety, but I always qualify it by talking about current and future state.

Symantec, on the other hand, knows that if they’re ever going to sell Norton Whizzbang for OS X, they’re going to need to prime the pump. So what better marketing method than have some empty threats leveled specifically at OS X – though Ubuntu, OpenBSD, VMS and BeOS are just as susceptible to Phishing threats, for that matter. (Of course, they won’t be marketing Whizzbang for those other operating systems.) Just put on your PR facepaint, do the FUD dance and scare yourself up some “hacker clouds” on the horizon. I feel like Duke again, but later on in the book:

Bad waves of paranoia, madness, fear and loathing, intolerable vibrations in this place. Get out! The (marketing?) weasels were closing in. I (can) smell the ugly brutes!

This is at least the second case of security professionals using OS X as a PR tool to make headlines. Which, of course, is ironic. If there’s any group of consumers who are skeptical of vendor claims, it’s security professionals. We know that controls are only so good. We know there’s no “silver bullet.” In article after article, and to our stakeholders and data owners, we parrot nice phrases like “security is a people problem” and “there’s no silver bullet.” Yet time and time again, we allow vendors to make (and break) those promises to us. “We stop zero day threats with proactive protection and zero false positives.” Or, even worse: “Product X is the silver bullet” — an actual claim quoted to us by (let us say) someone who knows better.

Complicating the problem are so-called reviews and independent certifications. It’s happened to all of us. We have a dog of a product that doesn’t work — the manual seems more fitting to keeping a “Hello Kitty” Tamagotchi alive than actually troubleshooting an installation, the vendor’s support group can’t even get the product to display a “Welcome” screen, and when it does start, the product either catches fire or explodes packets all over the place, somehow taking down the phone system in the process. It’s at that point that we open up our free security magazine and find they’re giving the product with smoke coming out of the fan vents a “5 out of 5 star rating” with a recommendation badge. It’s a stupid game. As a former product manager, I can tell you that I’d rather have marketed a firewall with Marcus Ranum’s “Apparently OK” certification, as fabricated and absolutely fictitious as it was. It would have put our product under more scrutiny than certain other well known firewall product certification programs.

At the end of the day, it needs to stop. And we’re the ones that need to stop it, dear readers. Not being the type to complain and run, let me offer some ways we can foster accountability.

  1. Never take a magazine rating at face value. In fact, tell magazine reviewers that they need to put a metric up: “How long the sales engineering team spent at our lab trying to get the stupid thing to start right.”
  2. If a vendor wants to put a demo in, tell them you’d love to look at their product, but would like to invite a few friends. Have your friends put together a collection of the product’s minimum required hardware and a testing harness — then have the vendor put their money where their marketing is and demo in front of your entire local ISSA chapter using the hardware and testing environment you provided. If they can’t get the product up, running, and tested in a morning (and buy burritos for the ISSA)….
  3. Watch your demo contracts. I know of one Fortune 500 that *had* to license the product for the enterprise because they demo’d too long, and the tricky vendor had specific clauses in their contract.
  4. When appropriate, tell the vendors you don’t buy from that one of the reasons you’re not giving them the PO is because they use misleading marketing.
  5. ALWAYS perform rigorous risk analysis on how the product will actually reduce risk to the organization before buying. I know of several cases where a FAIR risk analysis showed no real business NEED for a vendor’s technology – technology that was supposed to be “Best Practices” and every other similar company had dished out hundreds of thousands of dollars for.

Using points #4 and #5, I won’t be buying Norton Whizzbang any time soon.

Expecting an Increase in Threat Event Frequency (TEF)

Where there’s smoke, there’s fire?

We’re seeing some very interesting information these days. Symantec just released their report for the year (Internet Security Threat Report). And other researchers are warning us about new studies and their findings (this one mentioned on Dark Reading, and these 3 from Emergent Chaos).

Dear Readers, I’ve taken the time and effort to read these — just for you and your benefit :) And just what did I find out? Short answer: probably very little that your own internal rapid cognition hasn’t already told you.

Mainly: losing laptops is bad and will get worse; Phishing is popular; most attacks are focused on individual people and their personal computers/accounts and not organizations; and if you’re still using “falling stock price” as a rationale for security spending — your FUD dance is going to stop working (if it hasn’t stopped scaring executives already).

And though I have specific problems with each report, I want to thank everyone who contributed to them. This sort of data makes it much easier for us to model and analyze existing risk.

For example, the Symantec report gives great data on “vulnerability” (sic) discovery and time until an exploit appears. I think time-until-exploit is one of our most important external metrics to keep an eye on, and am thankful for that kind of data. The better we know time until exploit for various categories of weaknesses — the better we can predict Threat Event Frequency (TEF).

XSS, though a hot topic, has very little real data surrounding it. However, there’s enough existing data for us to realize that if the Fortune 500 — with arguably significant risk, and the resources to combat it — are as susceptible as the media makes them out to be, we ought to be scanning, testing, and building risk models that take into account a dramatic increase in TEF for cross-site scripting.

What about Laptops? If you haven’t already, it might be time to perform a little asset/loss management and come up with expected TEF and Loss Event Frequencies (LEF) for management and data owners.
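For the curious, the laptop exercise above can be sketched as a toy Monte Carlo simulation in Python. Every number and parameter name below is a made-up illustration for the sake of the sketch, not a FAIR-official value or real loss statistic:

```python
import random

def simulate_laptop_lef(fleet_size, annual_loss_rate_range,
                        p_sensitive_data, trials=10_000, seed=42):
    """Toy Monte Carlo sketch of Loss Event Frequency (LEF) for lost laptops.

    All figures are illustrative assumptions, not real loss data.
    """
    rng = random.Random(seed)
    lo, hi = annual_loss_rate_range  # fraction of the fleet lost per year
    loss_events = []
    for _ in range(trials):
        # Threat Event Frequency: laptops lost or stolen this year
        tef = fleet_size * rng.uniform(lo, hi)
        # A threat event becomes a loss event only if sensitive data is
        # exposed (our stand-in for "vulnerability" in this toy model)
        loss_events.append(tef * p_sensitive_data)
    loss_events.sort()
    return {
        "median_lef": loss_events[trials // 2],
        "p95_lef": loss_events[int(trials * 0.95)],
    }

result = simulate_laptop_lef(
    fleet_size=2_000,
    annual_loss_rate_range=(0.01, 0.05),  # 1-5% of laptops go missing
    p_sensitive_data=0.3,                 # 30% hold data worth losing
)
print(result)
```

Hand management the median and the 95th percentile, rather than a single scary number, and you have the beginnings of a defensible LEF conversation with your data owners.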

But do take a read, let me know what you think of these new data sources. And if you have analysis — please share!

Practical Security, Theoretical Exploits, and FAIR use.

Over lunch the other day, an analyst about to take FAIR training mentioned this to me.

"I love the whitepaper. I love FAIR, the models, consistency, all of it. The one thing I worry about is how practical it will be for me in my day to day job. I mean, if I’m not head down in some logs or scanner output, I’m talking to a vendor or my boss. I know I’ll be doing formal analysis sometimes, but I just don’t know how much."

It’s a very valid question – how practical is FAIR? I’ve spent part of my past week discussing the meaning of vulnerability with other professionals. At one point the discussion was accused of taking an "if a tree falls in the woods" direction. But when I heard that question from my analyst friend, I had to laugh.

The same analysts, after understanding FAIR and its application, often accuse FAIR of being too practical!

Let me give you an example. Recently there’s been no little amount of discussion concerning Apple, SecureWorks and some wireless drivers. Inflammatory remarks by both sides have left this a somewhat acrid discussion. For those not familiar, SecureWorks researchers claimed to have found a vulnerability in Apple’s wireless drivers that led to complete compromise. The Washington Post decided to headline the whole affair. Initial controversy was created because SecureWorks claimed to have given Apple notice, and Apple claimed not to have been provided with a working exploit. Long story short – for various possible reasons, the general public has yet to see an actual exploit of the kind SecureWorks demonstrated "in the wild".

To most analysts, this is of critical interest, if not importance. This is evident from the amount of discussion surrounding it – as we techies say, there’s been a lot of cycles spent on this already. It’s got all the ingredients to scare us to death – ownership of the root account, the apparent speed with which the laptop is compromised, the ubiquity of wireless, the fact that it’s wireless in and of itself is enough to give every self-respecting security pundit the heebie-jeebies…

But to the FAIR-trained analyst, it’s time to move on. This issue shouldn’t even be on our radar. And once I’ve explained why, you’ll understand why some folks might think that FAIR is too practical.

We know that we can only have a Loss Event when we have a Threat Event to which we are Vulnerable. In non-FAIR lingo: someone’s gotta attack us, and get by our controls, in order for us to have any problem. Now given the claimed nature of this exploit, we know we will have very little Control Strength – a fault in the wireless drivers means that there are very few countermeasures we can use against this potential exploit. The required Threat Capability rating is high, too – as quickly as the security researchers seemed to take over the MacBook in question, it still takes someone from the very best of the most technical community to act against us. No doubt, our Vulnerability here is pretty high.

However, outside of a fuzzy (and to some, questionably valid) video – no real exploit of this has yet been seen. Now of course this doesn’t mean the problem doesn’t exist! If valid, there may be a number of folks out there who, through their own research, have added this exploit to their bag of tricks. What it does mean is that it has yet to really hit mainstream, and that exploitation of this weakness is still very theoretical.

So our Threat Event Frequency (TEF) approaches zero. And despite our high vulnerability, a TEF near zero means near zero Loss Events. Ladies and Gentlemen, until we have intelligence of a working exploit, it’s time to move on to more pressing issues.
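To make the arithmetic explicit, here is the FAIR relationship from the paragraphs above in a few lines of Python. The numbers are my own illustrative assumptions about the Apple/SecureWorks scenario, not measurements:

```python
def loss_event_frequency(tef, vulnerability):
    """LEF = TEF x Vulnerability: a loss event is a threat event
    that gets past our controls (FAIR's basic relationship)."""
    return tef * vulnerability

# Illustrative assumptions, not measured values:
high_vulnerability = 0.95     # weak Control Strength vs. capable attackers
tef_no_known_exploit = 0.001  # attacks/year: no exploit seen in the wild

lef = loss_event_frequency(tef_no_known_exploit, high_vulnerability)
print(f"Expected loss events per year: {lef:.5f}")  # effectively zero
```

High Vulnerability multiplied by a near-zero TEF is still a near-zero LEF, which is the whole point: the multiplication does the prioritizing for you.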

In the same way, my analyst friend will never look at his scanner output the same way. A database somewhere will be telling him about the criticality of the "vulnerability" (sic) the scanner found. That criticality has nothing to do with risk. In fact, it has very little to do with actual Vulnerability until the found weakness is put in the context of Control Strength and Threat Capability. And the point at which my analyst friend has a "critical" scanner finding that FAIR tells him is a non-issue is the point at which he will likely come back to me and claim that FAIR is too practical. It’s the point at which most analysts (myself included) would rather lie beneath the safety blanket of "everything is theoretically critical" than do practical risk analysis. We sometimes call it "possibility vs. probability" – and it’s one of the biggest inefficiencies of modern Information Security.

note: Just because we don’t have TEF yet doesn’t mean that this won’t ever be an issue we need to deal with. Like all things technical, this has a lifecycle – we may just be too early in that lifecycle to worry about it, or it may never have a significant enough lifecycle – but for right now I’m betting that most people not employed by Apple or SecureWorks have bigger issues to worry about.

also note: We didn’t even get to the loss side of the risk equation. If your organization has little to lose on their laptops – then even a working exploit in the wild might make this issue "no big deal" for immediacy.

also also note: If you want to get bent out of shape over the way Apple or Secureworks handled this, great! Vendor/Researcher relations is a different matter, and to me – more "impractical" to our jobs than a working definition of vulnerability :)

When Analysts Disagree – The Benefit of a Framework for Risk

WPN writes in comment to our "ROI" discussion:

…just read the FAIR whitepaper, and I like it a lot. It’s both thorough and relatively easy to understand, even for a Risk N00b like myself. (The bald tire analogy is brilliant.)

But what still makes me uneasy are the qualification (we won’t say quantification) exercises that still rely solely on the analyst’s professional judgement. Your estimate of a Threat Event Frequency might be completely different from that of a co-worker. Add to that the fact that we all have a certain amount of denial going (“it’ll never happen to US”), and you still have a potential argument with your executive board.

The only way I’ve been able to get close to filling that gap is by adding historical data and threat intelligence reports. For a given estimate, I can say, “This is based on what kinds of events we’ve seen in this organization historically, what others in our neighborhood have seen, and what industry analysts are saying is happening *right now.*” It’s not just my opinion versus that of the CISSP down the block; it’s got a little bit more objective weight to it.

Le ROI est mort; vive le ROI! ;-)


Great observations regarding the challenges associated with quantitative estimates in the absence of solid quantitative data – i.e., a heavy reliance on “the analyst’s professional judgment”. A couple of things to consider are:

  • If one analyst’s judgment of, for example, the Threat Event Frequency in a scenario is different from another analyst’s, FAIR’s framework provides a basis for discussion and (usually) resolution. In fact, there are two directions you can go in troubleshooting differences in estimates:
      • The analysts can step down one or more layers in the framework to see if they can identify where the disagreement stems from. Do they disagree on Contact Frequency or the probability of Action? If the disagreement lies there, what’s the basis of the disagreement? This always leads to great discussion, and generally leads to agreement.
      • The analysts also can run the analysis using one estimate, then the other. Generally, one of the outcomes will be much more realistic than the other, which often leads to agreement on which estimate was more accurate.
  • Differences also are reduced when ranges are used for the estimates, rather than “precise” estimates (e.g., an estimate of between 1 and 10 times per year, rather than an estimate of 6 times per year).
  • Often, discussion between analysts reveals that a disagreement stems from a difference in how the analysts have defined some component of the scenario (e.g., threat community) that they’re basing their estimates on.
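The range-versus-point idea is simple enough to sketch in a few lines of Python (the estimates below are invented purely for illustration):

```python
def ranges_overlap(a, b):
    """Do two analysts' (low, high) TEF estimates overlap?"""
    return a[0] <= b[1] and b[0] <= a[1]

# Point estimates that look like disagreement...
analyst_1_point, analyst_2_point = 6.0, 2.0

# ...often become agreement once expressed as ranges:
analyst_1_range = (1.0, 10.0)   # times per year
analyst_2_range = (0.5, 4.0)

print(analyst_1_point == analyst_2_point)                # False
print(ranges_overlap(analyst_1_range, analyst_2_range))  # True
```

Two point estimates of 6 and 2 per year look irreconcilable; two ranges of 1-10 and 0.5-4 per year share plenty of common ground to argue from.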

Troubleshooting estimates won’t always lead to agreement, but my experience has been that it almost always does. Either way, having a clear and consistent framework for analysis tends to significantly reduce the amount of denial or paranoia that often inhabits an analyst’s judgment, and provides a basis for resolving disagreements.

Consider, too, that qualitative estimates suffer the same challenge. One analyst may estimate a value as, for example, “Medium” when another analyst rates it “High”. In this case, however, the problem is made worse by the fact that they may have in mind roughly the same underlying quantitative value, but be working from a different “internal scale” (i.e., one person’s “high” may be another person’s “medium”). And once you explicitly define your qualitative scale quantitatively (to resolve this problem), you cross the line into quantitative analysis.
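The "internal scale" problem is easy to demonstrate. Here are two invented qualitative scales that map the same labels to different underlying numbers (both scales are illustrative assumptions, not anyone's published ratings):

```python
# Two analysts' private "internal scales" for the same labels:
analyst_1_scale = {"Low": (0, 2), "Medium": (2, 10), "High": (10, 100)}
analyst_2_scale = {"Low": (0, 1), "Medium": (1, 6),  "High": (6, 100)}

def label(scale, value):
    """Translate an underlying frequency into that analyst's label."""
    for name, (lo, hi) in scale.items():
        if lo <= value < hi:
            return name

tef = 8.0  # the same underlying quantity: times per year

print(label(analyst_1_scale, tef))  # Medium
print(label(analyst_2_scale, tef))  # High
```

Same number, two different labels: the disagreement is in the scales, not the estimates, and making the scales explicit is exactly the step that crosses the line into quantitative analysis.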

Bottom line – data’s great when it’s available (and when it’s valid…). But in the absence of good data, decisions still have to be made in as logical and rational a manner as possible. Models like FAIR provide a logical and rational framework for better understanding and analyzing a problem space, and making better informed decisions.

ROI? Girl, Don’t Even Go There!

Do come back here and read this post, but first — don’t walk, RUN! over to Layer 8 and read Wendy’s post on “Alternative to ROI”.

Second great post in a row by Wendy, in this one she covers many of the problems I hear on a regular basis concerning security spending and justification. I’d like to build on Wendy’s excellent post by explaining the FAIR approach.

One thing we never wanted to do on the blog was pitch RMI products. However, I will mention to those of you who know FAIR but haven’t taken the RMI training that the Management Level Course discusses exactly how to go after “business case” or, as Wendy calls it, effectiveness of security investment. I thought today I’d reveal a little of what lies behind the corporate curtain in that regard.

One of the most amazing things about FAIR is that once you have a working foundation/taxonomy for risk, there’s no end to practical use and application — especially as you have analysts and engineers, for the first time, giving real data and metrics to management. Combine that data with inference-based statistical modeling, and suddenly there’s a whole new set of information that decision-makers have to work from. Wendy breaks down performance-based categories as follows:

  • maintaining security infrastructure (capital spending on hardware, software and maintenance)
  • maintaining compliance with existing requirements (legal, compliance and “best practice,” whatever that is)
  • remediation (show me an organization that doesn’t spend on remediation, and I’ll show you a shop that’s been open less than a year, or is going to be open for less than a year)
  • training and awareness
  • developing solutions for new compliance and business requirements
  • incident response

And then adds the problem of unpredictable costs to the equation. These are very good places to start. In FAIR, Jack has identified 3 management categories under the Risk Management Landscape within which Wendy’s categories would fall:

Program Management — in which decision, planning, and execution measurement functions live.

Loss Event (Controls) Management — in which we identify the elements of our control framework, their lifecycle and measure control assurance.

Object (formerly Asset) Management — in which we identify the objects we need to control, and measure the performance of our lifecycle management.

The diagram is much too complex (and a little too much our IP) to show off here, but you can probably quickly deduce how each Management discipline the security organization is responsible for feeds data into FAIR, and then FAIR allows us to “structure” our data in a way that we understand (using risk and not just control effectiveness) to create information for each other category to use. For example, Loss Event Management gets data from Object Management, processes it using FAIR, and then gives that information to Program Management, who uses FAIR to create new data in making decisions or measuring execution – which affects Object Management, which gives data to Loss Event Management, who processes it anew using FAIR, and then gives that information to Program Management; repeat cycle….

For Wendy’s investment effectiveness predicament, Object and Loss Event Management data, when produced and harvested the right way, can give security management the information they need to talk to the data owners about the value of security, and what we’ve called just about everything but ROI, including business case or cost-benefit analysis. We can move from just measuring the effectiveness of past spending to a point where we understand our position in the control lifecycle and model how effective the control may be in the future – and, crucially, whether more money will be needed to maintain that level of effectiveness. But most importantly, we stop saying, “well, if you give us half a million dollars more, we’ll be, ummm, much better” and start saying “by spending $500,000 we expect to reduce losses by $800,000 over a 3 year period.” Which might smell, sound and even look similar to ROI, but as much as I’d like to say it is, it just isn’t.
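That "spend $500,000, avoid roughly $800,000 over 3 years" statement can be sketched as a toy business-case calculation. All inputs below are illustrative assumptions chosen to land near the post's example figures:

```python
def expected_loss_reduction(lef_before, lef_after, loss_magnitude, years):
    """Expected loss avoided over a horizon:
    (LEF_before - LEF_after) x average loss magnitude x years."""
    return (lef_before - lef_after) * loss_magnitude * years

def business_case(spend, lef_before, lef_after, loss_magnitude, years):
    """Compare control spend against the losses it should avoid."""
    avoided = expected_loss_reduction(lef_before, lef_after,
                                      loss_magnitude, years)
    return {"spend": spend, "avoided_loss": avoided, "net": avoided - spend}

# Illustrative inputs, roughly matching the example in the text:
case = business_case(spend=500_000,
                     lef_before=4.0,          # loss events/year today
                     lef_after=1.33,          # with the new control
                     loss_magnitude=100_000,  # avg. loss per event
                     years=3)
print(case)
```

Spend $500,000, avoid about $801,000 of expected loss, and the conversation with the data owner is about a net figure rather than "ummm, much better."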

Also relevant to Wendy’s post is how good risk management principles can help data owners understand the need for that unpredictable spending. Given the proper FAIR risk analysis, data owners are presented with amount and frequency of loss that they can expect in their current state, and a solution set to choose from to reduce their risk to acceptable levels. FAIR in this regard is used by Fortune 500 F.I.’s very successfully.

What’s most important to me of all of this is that you understand that risk analysis — or, if you like, enterprise risk assessment — should not be a monolithic event that happens every 18 months, but something you do on a daily basis.

Possibilities Abound

If you haven’t checked out the Matasano Chargen blog, you should really add it to your RSS reader of choice. Two posts there caught my attention today. The first one is really great. It points out one of my peeves: the grandstanding of consultancies that overblow findings in a “study” as a PR tool. In this case, a company called “Klocwork” released a study of the Mozilla.org software, essentially damning the software on the basis of finding a number of issues within the code base. It hits slashdot as:

OMG, 611 Defects, 71 Vulnerabilities Found In Firefox!!!!!

Under closer analysis, it looks like only 2-3 of them are actually relevant as “vulnerabilities,” and even then one wonders what the risk is to end users because of them. As Matasano points out, this might just backfire on Klocwork – as now we know their software analysis tool is prone to over-reporting.

Great Job by Matasano, very good post.

And usually their writing is right on — as I said, I read every post and always find useful stuff. However, in another post, Matasano, while rightly pointing out that people should keep the software they use up to date, leans on the crutch we all use: overhyping vulnerability over risk (note that this is completely different from Klocwork – Matasano is being the Good Samaritan; Klocwork is using possibility as a PR tool). I don’t know directly if it’s one of the “2-3” Klocwork found (I tend to doubt it), but the issue in question is that “Mozilla’s independent implementation of RSA and X.509 also fails to validate signatures properly.”

“Philip Mackenzie and Marius Schilder of Google informed us of Daniel Bleichenbacher’s recent presentation of a common implementation error in RSA signature verification, a failure to account for extra data in the signature. For signatures with exponent 3 it is possible for an attacker to calculate a value for this extra data to make an altered message appear to be correctly signed, allowing the signature to be forged. Mozilla’s Network Security Services (NSS) library was vulnerable to this flaw.”

Matasano adds after the above paragraph is quoted: “The impact of this advisory: People can forge SSL certificates to unpatched Firefox. Get your mom to upgrade right now.”


“Mom? Hi, it’s me. Listen, you need to drop whatever you’re doing and go upgrade Firefox. “What’s Firefox?” It’s the browser you use. Go to Mozilla.org and download the newest copy. Well, there’s the possibility that someone could interrupt your e-commerce session to Williams Sonoma and then steal your credit card or something. Um, OK — if you must know — “For signatures with exponent 3 it is possible for an attacker to calculate a value for this extra data to make an altered message appear to be correctly signed, allowing the signature to be forged.” How likely is this to happen? Well, not very. You have a better probability of being hit on the head with a cast iron frying pan by a dyslexic Norwegian Mime, but upgrade anyway.”

Ok, seriously. As Matasano points out, we should all upgrade. I’m not trying to pick on them; we’re all guilty of doing what Matasano has done – I have had my in-laws scared to death of the computer and Internet for years now.

Also, don’t misunderstand me, I think vulnerability discovery and reporting is a great thing. The problem comes when we excite people for possibility over probability (don’t even get me started on this one).

When we, as a profession, make these statements, we should be examining whether we can be perceived as crying wolf. As security men and women, we do it by nature, and for good reason. However, we usually don’t realize the impact it has on our audiences. This is very true for corporate security, as well. If we continue to scream “GAH!!! Possibility!!!” at every meeting, then eventually, when it comes time to really get people enthused about patching, we risk being ignored.

This is particularly true with C&A/Project Management processes. Security groups wielding the probability club tend to eventually become one of two undesirable personalities – either the Authority-wielding dictator, or a speed bump on the road to business development.

Our image today is from rainbowpromotions.org. If you’re interested in hiring the poinsettia mime, give them a call! From their site:

Poinsettia Mime is “Blooming For The Holidays” and offering both thirty minute stage performances as well as strolling entertainment. Celebrating the magic of the season her program contains such antics as the candy cane soft shoe and building a snowman. Wonderfully comic and full of audience interaction!

Military Strike Force, or F-Troop?

It’s been a slow week for InfoSec/Risk news.

However, one of Richard Stiennon’s articles last week started some discussion, and I thought it worthwhile to reply. His article: “Is there really a need for business cases?” likened an IT Security group to a “Tactical military strike force.” The crux of his contention is that security (and therefore IT Risk Management) should not be treated as a “business process” — but should be more like “fighting a battle.”

Now I can’t disagree with him wholesale. But thinking that there’s a spectrum within which a pendulum of fad swings between security as a business function and security as, um, whatever the whim of the security team wants is, at best, myopic. At worst, it’s an overreaction that causes security and its “cause” within an organization to suffer.

“Security is much more akin to fighting a battle than it is to ‘aligning business objectives’” is an astute observation. But to deny the importance (and even the necessity) of risk management in achieving the strategic objectives of an organization is plain folly.

Securosis.com does a fairly good job of answering Richard’s article, but at the risk of taking the comparison too far, let us imagine how a tactical strike force that disregards processes, and “alignment with objectives” might operate on the field of battle.

I offer the following imagined discourse between the leader of the Tactical Strike Force, Sergeant Syso, and his superior, General BizObjective (I envision Graham Chapman and John Cleese in the roles). The scene takes place in the strategy tent of General BizObjective prior to a significant battle:

General BizObjective: Sergeant Syso, you and your men are going to be needed in this upcoming battle. Now I was thinking that you could…

Sergeant Syso: With all due respect, Sir, I have reviewed the battlefield myself, and my men and I will be taking Hill number 1534!

General BizObjective: Well, Syso, that’s all well and good, but I’m not sure that Hill 1534 is an objective of value in this campaign.

Sergeant Syso: Sir! My team has taken hills in the past, and many of them were the exact size of 1534. Furthermore, the Chief of Staff suggests that taking such hills is a key function of tactical strike teams, Sir!

General BizObjective: Listen here, Sergeant. I’m sure if you take 1534, the Chief of Staff will be happy with the efficiency and effectiveness with which you take the hill, but I think I’d rather you assist Sales Platoon in their thrust into map objective 2935, Mobile Connectivity. I believe that by taking Mobile Connectivity, we can better fight this battle and, ultimately, the war.

Sergeant Syso: General, Sir, I insist that we take the hill 1534. Furthermore, it is well known that all good military manuals discuss the importance of the high ground and my commandos are just the outfit to take that hill! In fact, if you don’t allow me to take that hill – I know that we will not be following the suggested operating procedure as documented by our War College.

General BizObjective: (Wearily) Fine, Sergeant. You and your men take 1534, when you’re done with your conquest of the hill, please rejoin the rest of the army at map objective 2935, where we will actually be fighting a real battle.

Sergeant Syso: Thank You Sir! However, in order to take the hill, I will need to requisition special supplies!

General BizObjective: (Now visibly tired of the demands of his Sergeant) Special supplies?!

Sergeant Syso: Yessir! A Submarine!

General BizObjective: A Submarine? To take a Hill?

Sergeant Syso: Yessir! It’s well known that my counterpart in our Alliance has a submarine at our disposal, and to take hill 1534 I will need a submarine.

“Right, Stop that! It’s silly! Very Silly indeed!”

And it’s just as silly to ignore the duty of security organizations to communicate effectively with those they protect and those who employ them.

In all seriousness — of course you make a business case for security. If you can’t, then you shouldn’t be employed. End of story. It’s like a discussion I had recently surrounding what is a cost-effective control. Someone offered that a control was “cost effective” only if it was cheaper to operate than acquire. This, of course, is nonsense. I might buy that a control is cost effective if it protects more value than it costs to acquire and operate it, but for us, dear reader, it’s important to note that you cannot ignore the risk decision to implement or not implement said control any more than you can ignore the market reasoning behind that definition of “cost effective.”
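The cost-effectiveness definition from that discussion fits in a few lines of Python. The figures below are made up for illustration only:

```python
def is_cost_effective(protected_value, acquisition_cost, annual_opex, years):
    """A control is cost effective if it protects more value than it
    costs to acquire AND operate over its lifetime (the post's working
    definition, not the 'cheaper to operate than acquire' nonsense)."""
    total_cost = acquisition_cost + annual_opex * years
    return protected_value > total_cost

# $1M of value protected vs. $200k to buy + $100k/year for 3 years:
print(is_cost_effective(protected_value=1_000_000,
                        acquisition_cost=200_000,
                        annual_opex=100_000, years=3))  # True: 1M > 500k
```

Note that the comparison is against total lifetime cost; the "cheaper to operate than acquire" definition never even looks at the value being protected, which is exactly what makes it nonsense.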

Without a business case, hanging onto best practices (the operating procedure as documented by the War College, above) and compliance (the threat of the Chief of Staff) will only buy you so much credibility in the organization. And, in fact, it will only buy you as much credibility as “best practices” and “compliance” have with the executive council or board of directors. Unless you speak “risk” — unless you use risk management and risk analysis to help the business get its job done — you’ll keep failing to understand how you can best help your data owners, and spend your time looking to magazine articles and vendors for the next big club to smack people over the head with in order to persuade them of your value.

And after a while, people get tired of being hit with clubs.

Images courtesy of Google searching, and belong to the people they belong to!

Controls, Risk and Role

Picture and Story from Engadget. The new smart radar signs can now tell you what your license plate number is as you speed by them. That’s really handy for me because I’m always forgetting my plate number; if I need a reminder I can just floor it as I pass one of these things!

Hope our readers in the US all had a great holiday. I have to say that I’m guilty of not thinking too much about Risk this weekend. But this morning brings interesting news:

Wireless Legislation

Gov. Arnold seems likely to sign into law this bill passed by the CA Legislature. Sometimes we’re just too far removed from the general bulge of the population distribution to understand: are there really people out there who still don’t understand that wireless needs better controls? And if the problem is that we need better controls, is it worth making vendors put pamphlets in the boxes, or could we just ask the vendors to spend the money on making sure we have nice, interoperable security?

Anti-Virus Cage Match!

A Greek lab has performed a study on the effectiveness of the various anti-virus vendors.

Anticipating Increased Threat Event Frequency and the Browser Wars

A guilty pleasure of mine is reading up on browsers. Compatibility with standards, features, etc. I don’t know why. My grandmother collects Blue Willow ceramics — I have 5 browsers on my laptop. Some folks have been insinuating that the latest IE release is nothing more than a tactic to prevent adoption of Firefox based on feature parity. However, someone in Redmond is at least trying to innovate in the browser space. Microsoft has built something like IPS for the browser.

BrowserShield promises to allow IE to intercept and remove, on the fly, malicious code hidden on Web pages, instead showing users safe equivalents of those pages.

More interesting to me:

It could also include additional features. Wang said the research team built its prototype to support add-ons for securing AJAX (Asynchronous JavaScript and XML) applications and to block things such as phishing attempts.

Now the cynic in me wonders if BrowserShield will conveniently block Google’s AJAX. But I do believe that what we’re seeing here is a smart move by Microsoft to anticipate the security needs of consumers in the future. It’s popular among prognosticators these days to predict that the incident du jour of the future will be direct attacks using malware to get credentials. Kudos to Microsoft for anticipating the need and building features. I’m not being cynical or sarcastic when I wonder aloud if this is a first for them. Either way, it’s good to see them being proactive.
A note about BrowserShield: should it rewrite the HTML, or just block the site completely? Currently, if my site wickedly intends to inject you with malware, BrowserShield will simply rewrite the HTML to be benign. But should you allow the user to visit the site at all?!
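For the curious, the “rewrite rather than block” approach can be illustrated with a toy sanitizer. This is emphatically not BrowserShield’s code, which interposes on pages far more cleverly than this; it’s just a Python sketch of re-emitting a page with the script content dropped while everything else still renders:

```python
# Toy illustration of rewriting a page to be benign rather than
# blocking it outright. Not BrowserShield -- just a sketch.

from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Re-emit HTML with <script> elements (tags and contents) removed."""
    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
        self.in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
            return
        attr_text = "".join(f' {k}="{v}"' for k, v in attrs)
        self.out.append(f"<{tag}{attr_text}>")

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Drop anything that appeared inside a <script> element.
        if not self.in_script:
            self.out.append(data)

def rewrite(html):
    stripper = ScriptStripper()
    stripper.feed(html)
    return "".join(stripper.out)

page = '<p>hello</p><script>stealCookies()</script><p>world</p>'
print(rewrite(page))  # <p>hello</p><p>world</p>
```

The user still sees the page, minus the payload, which is exactly the design trade-off I’m questioning: a rewritten page is friendlier, but a hard block tells the user the site itself is hostile.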

Another interesting project from the same group is Strider GhostBuster, “a rootkit scanner that looks for stealthy forms of malware.” This would actually be of benefit as well, IMHO (although shouldn’t it be labeled an “adminkit scanner”?). Let’s hope it sees the light of day.

There is something interesting here, though:

The BrowserShield project—the brainchild of Helen Wang, a project leader in Microsoft Research’s Systems & Networking Research Group, and an outgrowth of the company’s Shield initiative to block network worms.

Should Security be a piece of Systems & Networking, or should there be an independent Risk Management R&D group that acts as oversight? Adam might know more than we do on the subject (of course, he might not be able to tell us), but one of the interesting things I see browsing the Microsoft security blogs is how fractured the security initiative seems to be. This is an organizational issue that I think even non-technical enterprises need to address: is Security an overlay, or is it bolted onto networking? Knowing a few large companies as I do, I see swings back and forth, maybe due to personality, maybe due to individual ability to manage, but it’s an interesting philosophical debate:

should security be consolidated into a group with overlay and authority, or decentralized into functional groups with data owners making the final risk decisions?