CVSS Review

  • I recently had the privilege of being a guest on the Securabits podcast and, during the session, was asked about other frameworks.  I mentioned CVSS (Common Vulnerability Scoring System) in my answer and said I thought it had some serious problems as an analysis and measurement tool (however I also said there were good things about it).  Given time constraints, I didn’t go into detail in the podcast about what I thought was good or less-good about CVSS.  That’s what this post is about — to clarify and share my thoughts regarding CVSS (version 2.0).

    In the interest of keeping this post to a manageable length I’ll constrain my observations to what I believe are the most important strengths and weaknesses of CVSS.

    First, I have to acknowledge that what NIST and CMU have tried to accomplish with CVSS is both admirable and difficult.  I can only imagine the debates that must have taken place during its development regarding tradeoffs that needed to be made in order to come up with a practical result.  I also believe there’s value in CVSS, even as it is today.  That said, like any other model or framework there’s always room for improvement.  More importantly, like any other tool, its limitations should be well understood so that decisions based on it are made with both eyes open.

    What CVSS aims to be

    The CVSS guide mentions three key benefits the framework is intended to provide:

    • Standardized vulnerability scoring — essentially, a common means of measuring “vulnerabilities”.  I think the framework accomplishes this objective for technical vulnerabilities because it does, in fact, provide a standard against which technical vulnerabilities can be scored.  Enough said.
    • An open framework — i.e., a framework where scoring includes rationale so that the results don’t have to be accepted on blind faith.  As described further on, I think the framework hits this target in some respects, and misses completely in others.
    • Risk prioritization — i.e., a means of understanding the significance of vulnerabilities so that they can be compared and, thus, prioritized.  Here again, in some limited respect CVSS accomplishes this objective.  Overall though, as a CISO or other decision-maker, CVSS would not provide me with the information I need to make well-informed risk decisions.

    An open framework

    Great idea — a framework where justification is provided for the scores/measurements being used.  And for the variables a user makes choices about within CVSS (e.g., Exploitability) there is some basic descriptive rationale in the selection matrix.  Unfortunately, CVSS equations are also chock-full of weighted values, none of which appear to have a clearly documented basis.

    For example, the Base Equation multiplies Impact by 0.6 and Exploitability by 0.4.  In other words, someone decided that Impact was always 20% more important than Exploitability.  What’s the rationale for that?  In fact, by my count there are five weighted constants in the base equation alone.  Six more weighted values (eleven total) if you include the fact that each Base metric is given a value that appears to be arbitrarily assigned (e.g., for Confidentiality Impact the score will be 0.0, 0.275, or 0.660 depending on whether the vulnerability is assigned “None”, “Partial”, or “Complete” for that metric).  The other CVSS equations use weighted values in a similar fashion.  Perhaps there are well-documented and thought-through rationales for each of these, but I haven’t found them.
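    To make the point concrete, here is a sketch of the v2 base equation in Python.  The constants and metric lookup values are taken from the CVSS v2 guide; the function and table names are mine.

```python
# Sketch of the CVSS v2 base equation. Note how many hard-coded weights
# the score depends on: 0.6, 0.4, 1.5, 10.41, 20, and 1.176, plus the
# per-metric lookup values (e.g., 0.0 / 0.275 / 0.660 for C, I, and A).

CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def base_score(av, ac, au, c, i, a):
    """CVSS v2 base score from the six Base metric choices."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f = 0.0 if impact == 0 else 1.176  # yet another weighted constant
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)
```

    A worst-case vulnerability (network vector, low complexity, no authentication, complete C/I/A impact) works out to 10.0 — but every intermediate score is a function of those undocumented weights.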

    In my experience weighted values are rarely well-justified.  Furthermore, they tend to be very sensitive to specific conditions/assumptions.  For example, someone might argue that strong authentication is a more important control than logging.  After all, “an ounce of prevention…”   Consequently, it might be tempting to “weight” authentication’s value higher than logging.  Unfortunately, the logic breaks down if the scenario is focused on privileged insiders as the threat community — i.e., people who are supposed to have access.  In that scenario strong authentication isn’t a relevant control at all and logging is much more important.

    Unless there’s good rationale for weighted values, they introduce ambiguity, limit the scope of where the analysis can be applied, and can in some cases completely invalidate results.  At the very least, if weighted values are going to be used, some well-reasoned rationale should be provided so that users can make an informed choice about whether they agree with the weighted values.

    Effective risk prioritization

    As a decision-maker, two of the fundamental inputs to any decision are “What’s the likelihood/frequency of bad things happening?” and “How bad are they likely to be if they do happen?”.  These are the two values that, taken together, provide me with the loss exposure information I need in order to prioritize effectively.  So, in order for CVSS to be an effective aid in risk-informed prioritization it has to provide useful information on both of those parameters.
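    Put simply, loss exposure is the product of those two values.  A minimal sketch, using hypothetical numbers purely for illustration:

```python
# Minimal sketch: loss exposure = loss event frequency x loss magnitude.
# All numbers below are hypothetical, chosen only to illustrate the point.

def loss_exposure(events_per_year: float, loss_per_event: float) -> float:
    """Annualized loss exposure for a single scenario."""
    return events_per_year * loss_per_event

# A severe-but-rarely-attacked vuln vs. a moderate-but-constantly-attacked one:
rare_severe = loss_exposure(0.1, 1_000_000)    # $100,000 / year
frequent_moderate = loss_exposure(12, 50_000)  # $600,000 / year
```

    Without the frequency term, the two scenarios above cannot be distinguished — which is exactly the information CVSS leaves out.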

    CVSS tries to hit both targets, but falls short.  With regard to frequency/probability of loss, CVSS focuses on the likelihood of attacker success from a couple of different angles, but never addresses the frequency/likelihood of an attack occurring in the first place.  Without that metric, the likelihood of attacker success simply does not provide enough information for me to understand the frequency/likelihood of loss.  CVSS may be trying to address the likelihood of attack through its Access Vector metric which, it could be argued, implies that the farther away an attacker is from the target, the less likely an attack might be.  No argument with the logic (if that is in fact what the metric is supposed to represent), but there are a lot of assumptions built into that, including an assumption that the attacker isn’t an insider.

    From a loss magnitude perspective, the Base Metrics include Confidentiality, Integrity, and Availability references but these are actually measuring something pretty different.  In a longer post at a later date I might describe a way in which these CVSS metrics could be used in a very interesting way, but that would make this post WAY too long.

    CVSS’s Environmental Metrics try to include additional loss magnitude considerations.  Besides being very qualitative, there appear to be some significant logic flaws in the approach.  For example, the Target Distribution metric is essentially a measure of “surface area” (i.e., how many systems could be affected).  One problem with this is that there are many scenarios where a single critical or highly sensitive system/asset is exposed (i.e., a small Target Distribution) but gross exposure exists.  The way CVSS math works, this exposure would be unaccounted for.  Something else to keep in mind is that Target Distribution is also a key consideration in loss event frequency (it may be even more important there in many respects), which isn’t accounted for at all in CVSS.

    Setting aside the points above, prioritization of CVSS ratings against anything outside of CVSS isn’t practical because CVSS uses an ordinal scale.  You can’t usefully compare something that was measured on a 1-to-10 ordinal scale against something that was measured in monetary values or, for that matter, on a different 1-to-10 scale.


    I’ve blogged before about the problems associated with using math on ordinal scales, so I won’t belabor the point here.  Suffice it to say that it just doesn’t stand up to scrutiny.  That said, if the user recognizes that the results are pretty much meaningless for anything but comparing one CVSS value against another, then I guess no harm, no foul.
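    A tiny illustration of the problem, using hypothetical scores and loss figures: ranking by an ordinal score and ranking by estimated loss can invert, so the two scales can’t be meaningfully combined.

```python
# Hypothetical illustration: ordinal scores vs. monetary loss exposure.
# The gap between two ordinal scores says nothing about the ratio (or
# even the ordering) of real-world losses.

cvss_score = {"vuln_a": 4.3, "vuln_b": 9.0}              # ordinal 1-10 scale
est_annual_loss = {"vuln_a": 250_000, "vuln_b": 40_000}  # hypothetical dollars

rank_by_score = sorted(cvss_score, key=cvss_score.get, reverse=True)
rank_by_loss = sorted(est_annual_loss, key=est_annual_loss.get, reverse=True)

# The two rankings disagree: "vuln_b" tops the score ranking while
# "vuln_a" tops the loss ranking.
```

    Within CVSS, comparing one score against another is fine; the trouble starts the moment a score is treated as if it measured loss.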

    Bottom line

    For all I know, the people who put CVSS together already thought through all of this (and the other problems within CVSS that I haven’t talked about here) and decided that what they came up with was the only practical result given the constraints they faced and their objectives.  Nothing wrong with that.  Trade-offs are inevitable.  It is important though, for users of the tool to have a realistic and accurate understanding of its capabilities and limitations.

    CVSS seems like a decent way to measure and compare technical deficiencies (“vulnerabilities”) against one another from a “(Very roughly) how much weakness does each vulnerability introduce relative to all of the other vulnerabilities measured using CVSS?” perspective, which can be useful information.  What it doesn’t provide is meaningful information about how these vulnerabilities stack up in the bigger picture — i.e., “How important are these vulnerabilities relative to the other concerns I have to consider spending resources against?”  In other words — “How much do/should I care about the findings?”  In order to be useful in answering these questions, CVSS would have to evolve considerably.

    Speaking of evolution… RMI has on the drawing board a potential alternative to CVSS that we believe will be both practical and more effective in characterizing the risk associated with vulnerabilities.  Stay tuned!



    1. Jesper Jurcenoks Feb 11

      Please join the First CVSS SIG so that we can work with you on your concerns.

      Jesper “JJ” Jurcenoks
      Active member of the CVSS sig under

    2. Jack Jones Feb 12

      @ Jesper

      Thanks for your suggestion. I’ll give that some thought. Off the cuff, I have some concerns based on working with teams of subject matter experts on projects like this in the past: 1) the changes I would propose to CVSS are significant, 2) because the changes would be so substantial, working with an established team like the SIG can be extremely difficult because of existing inertia, and 3) having that many cooks in the kitchen tends to result in a lot of compromises that water down the result. Of course, on the plus side, it can be a significant benefit to having the experience and perspectives that the team brings to the table. I’ll talk to the people I’m already working with on the effort and see what they recommend.


    3. Phil W Feb 22

      The point you raise about the absence of likelihood ties into the fact that CVSS provides a common or baseline comparison. This is precisely the reason why I believe that CVSS scoring is very effective.

      One of the biggest challenges is obtaining reliable sources of data to determine the likelihood. I will give an example:

      Scenario 1 – You set up a stall on the street with a loaded pistol and a sign saying “shoot me”.

      Scenario 2 – You take the same set-up into a prison.

      The threat and impact are the same in both scenarios; the gun (vulnerability) if used would cause loss of life (hehe – availability or a life ‘outage’?). The likelihood may vary or it may be the same; since there are no reliable sources of data we cannot determine the likelihood (has anyone tried this before?), and where we make ‘educated’ guesses these may unduly influence the result. I would suggest that, irrespective of the probability, this level of threat shouldn’t be tolerated.

      Of course, the scenarios could be seen as being analogous to having unpatched critical services.

    4. Jack Jones Feb 22

      Hi Phil,

      A few thoughts regarding your points:

      1) You are correct that impact is the same in either scenario — love the “life outage” term. Going to have to remember that one!

      2) “shouldn’t be tolerated” is an opinion that would have to include other factors. Obviously, it’s hard to imagine a set of circumstances where it would be reasonable or rational to do something like your gun scenario. Fortunately, (or unfortunately) the situations we’re faced with in security are rarely that black and white. The outcomes are rarely life or death and there’s almost always the question of prioritization amongst many other issues/scenarios. This prioritization is where probability of an event occurring is a critical piece of information.

      3) Regarding probability and your scenario — are you setting up your booth on death row or in the cell block containing forgers, pick pockets, and embezzlers? True, in either case (just as on the street) there is the possibility of a life outage, but clearly the odds are lower in one case versus another.

      4) Related to the points above, another consideration is that, in the real world, organizations put themselves in positions of risk for a purpose — e.g., “We have Internet-facing servers because Internet commerce is a key component of our business strategy.” Hard to imagine what would drive a person to set up the gun scenario you mention. As a result, the gun scenario may not be a great analogy for the real world.

      5) For virtually any real world scenario, there are other key factors in play such as resource constraints where we have to recognize and work with the fact that organizations also need to deal with opportunity development, operational expenses, and a multitude of non-infosec risk issues. Here again, choosing requires comparison, comparison requires measurement, and measurement requires that we include factors like probability of the event occurring in the first place.

      Your point regarding data is often true. There are many scenarios where data are limited. That said, there is actually a lot more data on the threats against technical vulnerabilities than our profession seems to recognize. We’re just lousy at leveraging it. Furthermore, even when data are sparse, calibrated subject matter expert estimates are proven time and again (ref Douglas Hubbard’s work) to be significantly better than shooting from the hip or ignoring the issue altogether.


    5. Rahul Hada Mar 31

      Thank you, JJ, for providing such a good post.
      For the past few months I have been searching for the source of the values used in the CVSS metrics, without success. It would be great if you could suggest some good reading on how the values in CVSS were generated.
      I have gone through the CVSS guide but didn’t find any clue.

      Thank you again.



    6. Neil HB Mar 22

      Great post as usual.

      The point about Risk is well made. My view is that you cannot hope to compute risk when armed with just a single factor (even if you have split it into a number of metrics). True risk is not determined in that way.

    7. Jack Mar 23

      Great point Neil. And it’s even more problematic if the manner in which you carve up the single factor is fundamentally flawed.


    8. Jake Feb 27

      The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our expertise in managing vulnerability databases, along with our daily use of CVSS, we felt the feedback would provide valuable insight to improve CVSS in the future.

