List of Potential Future CVSS Improvements

Last updated: 2018-06-20

A list of potential improvements targeted at CVSS 3.1 and CVSS 4.0 has been created based on input and feedback from various sources.

Dates are in ISO 8601 format, i.e., YYYY-MM-DD or YYYY-MM.



List of Potential Improvements for CVSS 3.1

Should configuration dependency be considered an Attack Requirement?

How should we handle default, common, and unsupported configurations? Improve the CVSS standard to better explain how to score when configuration is a factor.

For example, if a managed switch has a vulnerability in SNMP v1, but the switch ships with this disabled in the configuration and almost no customers change this, how should it be scored? If we know that 99% of customers do enable it, does that change the score? If 99% of customers enable it but the documentation explicitly tells them to never do this, does that change the score?

Provide guidance on vulnerabilities in libraries (or similar)

The current CVSS guidance is difficult to interpret for vulnerabilities in code that provides functionality to other code, but which cannot run by itself. Libraries are an example. As such libraries cannot run without other code, scoring just the library results in a CVSS score of 0 because there is no way an attacker can access it. An approach to generating a more useful CVSS score is to consider how third-party code commonly calls the library, and to score the reasonable worst case. This is easier for libraries like OpenSSL that are typically used for a defined purpose, in this case securing network communication, but it is less simple for other types of library. For example, should we assume that a library that performs image conversion is used by programs that accept images from untrusted sources over a network and pass them to the library without validation checks? I think the answer is "yes", but it would be good to have guidance in the docs to explain the preferred approach.

Provide guidance on handling a vulnerability that has different Base Scores on different platforms

A vulnerability may have different impacts on different platforms. For example, an application may detect and use all the low-level protections that an operating system provides, so the impact may be lower on operating systems that offer more protections. "Low-level protections" refer to functionality such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP). Some vendors already have systems for providing different Base Scores in such situations, but if vendors commonly need to do this, it makes sense for the CVSS standard to suggest how to do it, to provide some conformity.

Potentially also consider the suggestion made by John Lento in the CVSS SIG meeting held 2016-05-26 that we provide guidance on handling vulnerabilities that have different Base Scores because of differences in other components. Examples were discussed in the meeting and the associated email thread.

Potentially also consider supporting the work of Penny Chase and Steven Christey from MITRE, who discussed their work on potentially using CVSS to help assess the risk of vulnerabilities related to healthcare hardware, software, and firmware in a meeting on 2016-07-21. One potential requirement is for a medical device manufacturer to release details of multiple mitigations for a single vulnerability, allowing affected users of the device to decide which mitigation is best for them. CVSS can help support this by allowing a score for each mitigation. Is there something we can add to the standard to help this? Maybe a statement that Environmental scores can be used in this way is enough.

Should Impact Metrics be based on the authorization scope of the vulnerable product, or just the vulnerable product?

Should a vulnerability in an application running within the authorization scope of an encompassing component be scored relative to the vulnerable product, or to the authorization scope containing the vulnerable product? Most SIG members interpret CVSS v3.0 as specifying the former, but there is some confusion on this, and disagreement on whether this is the best approach. The current approach yields a more appropriate score for vulnerabilities that have a large impact on a product that is an unimportant or very small part of a larger system. The alternate approach avoids problems when the vulnerable product can optionally be configured to use authentication, e.g., web servers that can authenticate web users, which are sometimes difficult to score under the current approach. It may also be easier to explain.

Since the introduction of Scope in CVSS v3.0, it is important to view CVSS scores in the context of the vulnerable product. For example, unrelated vulnerabilities in unrelated products that happen to have the same Base Score should not be assumed to represent the same level of risk to an organization. This has been true since CVSS v3.0, but it is likely worth spelling out in the docs.

When impact to Vulnerable Component is greater than impact to Impacted Component, should Scope be Changed?

Under v3.0, the Scope metric indicates if there is an impact beyond the Vulnerable Component. If there is such an impact, Scope is Changed. This is true even if the impact to the Vulnerable Component is greater than the impact to the Impacted Component.

For example, a particular vulnerability may completely compromise the Vulnerable Component and have a minor and temporary impact on the Availability of connected systems. Under v3.0 rules, Scope is Changed; and Confidentiality, Integrity and Availability are set based on the impact to the Vulnerable Component.

While discussing a related work item, it was clear that SIG members either did not know about this rule (documented in the User Guide), interpreted it differently, or thought it was wrong. This work item is to establish how this case should be scored, and to revise the wording to be clearer.

Privileges Required: split reconnaissance vs attack phases of an exploit as they may require different privileges

Discuss whether it is worth differentiating the privileges an attacker needs to perform reconnaissance before launching an attack from the privileges needed to perform the attack itself. For example, a vulnerability may require an attacker to log in using a legitimate account to determine the username of an administrator, before launching a brute-force attack against that account that does not require any privilege to perform.

Define the term “successful attack”

As well as defining the term "successful attack", clarify whether it includes cases where an attack impacts the vulnerable component, but not in the way the attacker planned. For example, the attack aims to execute code on a remote system, but crashes the remote system instead.

Formula rounding problems

The JavaScript implementation of the formula can suffer rounding errors that give unexpected results. Benjamin showed that particular values of Environmental metrics can lead to a formula of the form (1 - (1 - 0.22)), which we expect to equal 0.22, but which JavaScript calculates as 0.21999999999999997. When the CVSS formula rounds this number, the tiny difference can lead to a 0.1 difference in the score. We need to determine whether to add more explicit rounding to the JavaScript implementation, and perhaps also make clear in the Specification Document that such rounding is required.
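One possible fix, not yet agreed by the SIG, is to perform the roundup on a scaled integer so that exact multiples of 0.1 are detected reliably before rounding; the helper name roundUp1 below is illustrative:

    // Naive roundup: tiny IEEE-754 errors can change the result by 0.1.
    const naiveRoundUp1 = (x: number): number => Math.ceil(x * 10) / 10;

    // Scale to an integer first so that values which are exact multiples
    // of 0.1 (up to float noise) are recognized before rounding up.
    function roundUp1(input: number): number {
      const intInput = Math.round(input * 100000);
      if (intInput % 10000 === 0) {
        return intInput / 100000; // already an exact multiple of 0.1
      }
      return (Math.floor(intInput / 10000) + 1) / 10;
    }

    const x = 0.1 + 0.2;  // 0.30000000000000004 in IEEE-754 doubles
    naiveRoundUp1(x);     // 0.4 -- wrong, inflated by the float artifact
    roundUp1(x);          // 0.3 -- the expected result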

CVSS v3.0 Calculator in Spreadsheet Form

Albert E Wilson emailed first-sec on 2015-10-01 requesting "that the CVSS calculator be made available again in spreadsheet form." He says "again" because a reference calculator was released with CVSS v1.0.

Fix any identified formula errors or unexpected behaviors

In at least one identified case, raising an impact Base metric lowers the Environmental Score, which is incorrect; see the sketch below.
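As an illustration, the following sketch uses the v3.0 Environmental formulas for Modified Scope Changed. The metric weights are from the v3.0 specification; the particular metric values chosen are simply one combination that exposes the problem near the 0.915 cap on the Modified Impact Sub-Score:

    // v3.0 Modified Impact Sub-Score and ModifiedImpact (Scope Changed).
    const miss = (mc: number, mi: number, ma: number,
                  cr: number, ir: number, ar: number): number =>
      Math.min(1 - (1 - mc * cr) * (1 - mi * ir) * (1 - ma * ar), 0.915);

    const modifiedImpactChanged = (m: number): number =>
      7.52 * (m - 0.029) - 3.25 * Math.pow(m - 0.02, 15);

    // Weights: C/I High = 0.56, I Low = 0.22, A None = 0;
    // CR High = 1.5, IR/AR Not Defined = 1.0.
    const withIntegrityLow  = modifiedImpactChanged(miss(0.56, 0.22, 0, 1.5, 1, 1));
    const withIntegrityHigh = modifiedImpactChanged(miss(0.56, 0.56, 0, 1.5, 1, 1));
    // withIntegrityLow  ~= 6.052
    // withIntegrityHigh ~= 6.047 -- raising Integrity from Low to High
    // LOWERS ModifiedImpact, which can surface as a lower Environmental
    // Score after rounding.

The (MISS - 0.02)^15 term dominates near the 0.915 cap, so the function stops being monotonic there.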

Define "Exploitable", "Affected", and "Vulnerable" in the User Guide's glossary

A minor edit, as per the subject. It is only worth doing if we use the terms elsewhere in the documentation.

Allow CVSS to be extensible by third parties — The CVSS Extensions Framework

Several opportunities exist to leverage the core foundation of CVSS for additional scoring efforts. For example, a proposal was presented to the CVSS SIG to incorporate Privacy into CVSS (CPVSS) by overlaying combinations of CVSS Base and Environmental metrics to derive a Privacy Impact. Rather than reusing existing metrics for related or orthogonal purposes, defining a standard method of extending CVSS with additional optional Metric Groups, while retaining the SIG-approved Base, Temporal, and Environmental metrics, would address the needs of not only privacy, but also safety, automotive, and healthcare.

Delta change in CVSS Impact Metrics should be scored based on final C/I/A impact

A previously approved work item entitled "Clarify that we only score increases in impacts" removed ambiguity surrounding how to score a delta impact based on an increase in access, privileges gained, or other negative outcome resulting from successful exploitation. One ambiguity remained: what final Base metric value should be assigned to Confidentiality, Integrity, and/or Availability if the attacker starts with some ability to impact C/I/A prior to exploitation?

This minor work item will provide additional text to clarify that, when scoring a delta change in an Impact metric, the final impact should be used. For example, if an attacker starts with partial access to restricted information (C:L) and successful exploitation results in total loss of confidentiality (C:H), then the resultant CVSS Base Score should include the "end game" impact (C:H), as sketched below.
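A minimal sketch of the intended rule (the helper and types are illustrative, not part of the specification):

    type Impact = "N" | "L" | "H";
    const rank: Record<Impact, number> = { N: 0, L: 1, H: 2 };

    // Score the final ("end game") impact when exploitation increases it;
    // score None when the attacker already had that capability.
    function scoredImpact(before: Impact, after: Impact): Impact {
      return rank[after] > rank[before] ? after : "N";
    }

    scoredImpact("L", "H"); // "H" -- partial access escalates to total loss
    scoredImpact("H", "H"); // "N" -- no increase, so nothing to score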



List of Potential Improvements for CVSS 4.0

Distinguish attacks available only on specific networks (VPN)

Explore adding a metric value to Attack Vector to distinguish attacks that are exploitable only on specific networks, e.g., within a corporate intranet, versus anywhere on the Internet.

Improve measurement of Exploitability and Exploit Code Maturity to reflect the potential of a vulnerability being exploited

Several ideas have been suggested for this.

Measure physical outcome of an exploited vulnerability

Measure whether a successful exploit can cause physical damage and/or harm to living beings. Max Heitman uses the term "kinetic impact". People working with medical device vulnerabilities would like to see the addition of a measurement like this.

Measure up-stream and/or down-stream impacts

Explore a new metric to measure the direct up-stream and/or down-stream impact of an impacted component, perhaps restoring some aspects of Collateral Damage Potential from CVSS v2.0 to the Environmental metrics. Alternatively, maybe Asset Value or "Safety" is better. A Target Distribution metric may take us into "risk assessment" territory, but that is severity on a meaningful scale; the specific severity of one single instance of a vulnerability on one system is academic.

Change the Exploit Code Maturity metric so that the Not Defined value is the same as Unproven, not High

For RC and RL, one assumes a valid report and no available remediation; the Temporal score lowers if report confidence drops or a fix becomes available.

For E, the current and proposed measurement assumes, for any vulnerability by default (ND), the equivalent of High exploitability, and lowers the Temporal score if exploitability is found to be less than High.

While this is a safe, conservative approach, it seems to me to be inverted. Most vulnerabilities are not exploited; that is to say, there is no evidence of exploitation. Those that are exploited (or are likely to be) stand out as much higher priorities to address. So I'd suggest that the default (ND) for E be equivalent to U, and that evidence of exploitation should raise the Temporal score.
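To illustrate the effect, the following sketch applies the v3.0 Temporal formula, Temporal Score = Roundup(Base Score x E x RL x RC), with the published v3.0 weights (E:H = 1.0, E:U = 0.91; RL and RC Not Defined = 1.0) to a Base Score of 9.8:

    const roundUp1 = (x: number): number => Math.ceil(x * 10) / 10; // simplified
    const base = 9.8;

    // Current: E Not Defined carries the same weight as High.
    const temporalCurrent  = roundUp1(base * 1.0  * 1.0 * 1.0); // 9.8
    // Proposed: E Not Defined defaults to the Unproven weight, and
    // evidence of exploitation raises the Temporal score back up.
    const temporalProposed = roundUp1(base * 0.91 * 1.0 * 1.0); // 9.0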

Replace the Exploitability metric with a new metric named "Likelihood of Attack"

Art Manion sent an email to the CVSS list on 2015-03-16, with a Word document explaining the new "Likelihood of Attack" metric. See LikelihoodOfAttack.docx for details.

Move the Exploitability metric to the Base Metrics and remove the remaining Temporal Metrics

Seth Hanford sent the following email to Max Heitman on 2015-09-17 detailing his previous suggestion:

IMO, exploitability is something that could be measured in base. The other metrics don’t really matter.

If you did that, you could simplify things a lot. Temporal existed because there was a push for “things the vendor can track and update” and “things that other people will track and pay attention to” and in reality, that seems to have not really panned out.

The major downside to this is that exploitability becomes something that vendors have to track / assign. It also would push people to keep scores updated / need to have some source of authoritative scores. I’m not sure that’s bad, but I can see it being a topic of some resistance. Additionally, a move like this would probably be best done with some consideration to whether or not CVSS should really be tracking exploitability. Does it make sense to track something which isn’t about Severity, but is more about Urgency?

People use CVSS to prioritize, but we’ve headed more in the direction of describing vulnerability potential energy or vulnerability capability. Still, with hundreds of vulns out there, some of the most effective measurement is “Is there a metasploit module?” and if so, you can patch. So… that makes a fairly strong argument for tracking some kind of exploitability. That said, should it simply be “is this being exploited in the wild or in an exploit automation / framework?” in which case it could be a binary decision.

Re-introduce the Collateral Damage and Target Distribution metrics

Garret Wassermann added the following on 2015-09-18:

Collateral Damage (CDP) might be good to reintroduce, and could be tweaked/renamed to be a "Safety" (S) metric. I think this will be important as we see more car and medical device vulns in the wild. The vuln itself doesn't directly hurt people (only abuses the software/system), but may result in human injury. It strikes me as something important to include to boost the score to Critical territory whenever human injury/life is involved, even if the vuln is on its own pretty weak. This might be a good modifier in the Environmental metrics, or perhaps it should even be part of the Base metrics.

Target Distribution (TD) ("how widely deployed is this") also seems good to add back. After working some scores internally at the CERT/CC, the CVSSv3.0 Environmental metric in practice does not provide much additional information and seems to degenerate to just copying the same content from the Base metrics. To be more useful, the Environmental metric should include some unique metrics. Having TD for the Internet or within an organization seems useful for modulating a score and helping to prioritize.

Art Manion edited the wiki on 2016-04-18 to add the following comment. It has been moved from the list of topics to here:

TD does get into "risk assessment" territory, but that is severity on a meaningful scale; the specific severity of one single instance of a vulnerability on one system is academic. Another possible new vector, even further down the risk assessment path, would be something like "asset value."

Integrate CWSS metrics into current CVSS metrics (ITU proposal from Iran)

As part of the ITU standards process for CVSS v3.0, Iran submitted contribution T13-SG17-C-0541!!MSW-E.docx. It makes several suggestions, including the incorporation of Common Weakness Scoring System (CWSS) metrics into CVSS.

Add a way to baseline a set of vulnerabilities to the underlying component so their severity can be compared

Chris Betz verbally suggested, during the CVSS SIG meeting held on 2016-03-10, that CVSS provide a way to score vulnerabilities relative to an underlying component. This was the way all vulnerabilities were scored under CVSS v2.0, but the addition of Scope in CVSS v3.0 means we now score relative to the vulnerable and impacted components. This makes it hard to compare vulnerabilities with different vulnerable and impacted components. Adding a way to score the impact of a set of vulnerabilities on an underlying component they all share, such as an operating system, would allow their severities relative to one another to be determined.

Art Manion writes: An example of this, in Microsoft terms, is IE on "workstation" vs. IE on server. The same vulnerability in IE has significantly different severity due to differences in platform (default configurations of IE). Could be thought of as scoring CVSS on a combination of (CVE + CPE) instead of just a CVE.

Distinguish Active versus Passive User Interaction

During the CVSS SIG meeting held 2016-03-30, Chuck Wergin suggested adding an additional metric value to User Interaction to capture whether a victim needs to actively participate in the attack. A couple of examples, not discussed at the meeting but included here in an attempt to clarify: a passively interacting victim might merely view a web page containing attacker-supplied content, while an actively interacting victim might need to dismiss a security warning or manually run a downloaded file.

The User Interaction metric could have values None, Passive and Active, with None resulting in the highest Base Score.
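A sketch of how this might look: the weight for None (0.85) is the real v3.0 value, but the Passive and Active weights below are purely illustrative assumptions that split the existing Required weight (0.62):

    // Hypothetical three-value User Interaction metric (the Passive and
    // Active weights are invented for illustration only).
    const userInteraction = {
      None: 0.85,    // no victim participation needed (v3.0 weight)
      Passive: 0.68, // hypothetical: victim performs a routine action
      Active: 0.55,  // hypothetical: victim must actively assist the attack
    };

    // v3.0 Exploitability sub-score: 8.22 x AV x AC x PR x UI.
    // Using AV:N (0.85), AC:L (0.77), PR:N (0.85) for illustration.
    const exploitability = (ui: number): number =>
      8.22 * 0.85 * 0.77 * 0.85 * ui;

    exploitability(userInteraction.None);   // highest resulting Base Score
    exploitability(userInteraction.Active); // lowest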

Add a concept of "survivability"

Jim Duncan emailed the list on 2016-01-22 and made the following suggestion:

The only thing remaining that would enable us to replace our internally developed scoring solution we use for products in development would be baking in the concept of "survivability". Namely how hard is it to mitigate issues in the field? (Documentation, Remote Patch - Easy, Remote Patch - Hard, Recall). I'm hoping to see that in the next iteration of CVSS V3.

Add a metric for "Wormability"

During the CVSS SIG meeting held 2016-03-03, it was suggested that we could add a metric to indicate if a vulnerability could be exploited via a worm, the justification being that we are more likely to see attacks for such vulnerabilities, so they present a more severe threat.

Add Threat Intelligence metric to Temporal Metrics

During the CVSS SIG meeting held 2016-04-28, Dale Rich suggested adding a metric allowing threat intelligence information to be included as a Temporal metric, if such information is available.

During the CVSS SIG meeting held 2016-09-08, Dale Rich suggested the metric could allow a scorer who has access to a threat intelligence feed that rates vulnerabilities on a 5-point scale to include this metric in the Temporal metric group. The scale ranges from no attacks taking place to many attacks being seen (and/or exploits being included in many exploit and malware kits).

Art Manion: Strongly agree. Would cover the "wormability" issue above; mild suggestion to prefer "Threat" over "Threat Intelligence", and consider an even number of slots if possible (e.g., a 4-point scale).

Review how CVSS can better reflect the risk of a vulnerability being exploited

Sasha Romanosky sent the following in an email on 2016-04-28, which was briefly discussed during the CVSS SIG meeting:

First, I’ve been corresponding with a fellow who used to run the Verizon DBIRs. He has since left, but I was asking him about the claim that 85% of the breaches were caused by something like 10 different vulnerabilities. He mentioned that this claim came not from Verizon data, but from another company who was able to analyze network traffic. The fellow who did that analysis is Michael Roytman, and I’ll be talking to him today before our call. I wanted to just [chat] with him to get a better understanding of what they have and don’t have. Now, in addition to doing that work, he also had some harsh words for cvss. Specifically, he suggests that using cvss as strategy to fix vulnerabilities is a bad way to go because those vulns which are actively exploited (and which therefore pose the greatest risk), are not necessarily those scored as a 10. [Fair point, and my reaction to the claim is that, “fine, this isn’t exactly the point of cvss anyhow.”] His short paper is available at https://www.usenix.org/system/files/login/articles/14_geer-online_0.pdf .

Second, I understand that the most recent DBIR has just come out. And with it, OSVDB’s hating on it. Their comments are here, https://blog.osvdb.org/2016/04/27/a-note-on-the-verizon-dbir-2016-vulnerabilities-claims/ , and it does identify these most common vulns, which aren’t necessarily the ones we’d expect. Some of these are fair points, but mostly it’s funny to hear different people just wanting to get mad at other people.

Remediation Level: Add a way to track the source of vulnerability information

Add a way to track the source of vulnerability information, especially for Temporal metrics. For example, was threat intelligence information released by the product vendor?

This information would probably not form part of the CVSS score or Vector String.

Bring back Target Distribution

CVSS v2 had an Environmental metric called Target Distribution, an environment-specific indicator used to approximate the percentage of systems that could be affected by the vulnerability. This metric was removed in CVSS v3, and while most/all scoring providers (read: vendors) never published Target Distribution, some scoring consumers and coordination centers used it regularly.

The v3 Environmental metric group focuses on a specific instance or target, using Modified Base metrics and Security Requirements (e.g., database server A holding the cafeteria menu versus the same database server software on a different instance, server B, holding credit cards). Target Distribution instead describes the entire network environment rather than a single local machine or group.
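For reference, the v2 Environmental formula applied Target Distribution as a straightforward multiplier (values from the v2 specification: None 0, Low 0.25, Medium 0.75, High 1.0):

    EnvironmentalScore = round_to_1_decimal(
        (AdjustedTemporal + (10 - AdjustedTemporal) x CollateralDamagePotential)
        x TargetDistribution)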

Publish a Risk Analysis BCP document

CVSS is exclusively a measurement of the severity of a vulnerability. However, many new incident response teams struggle with how to use the CVSS score as input to an overall risk assessment. Many mature (P)SIRTs have well-worn models for taking CVSS, along with Exposure and Threat, and calculating a Risk score. While outside the scope of CVSS, it may be very helpful for new consumers of CVSS to have a SIG-approved Best Current Practices (BCP) document on some ways organizations use CVSS as an input to risk analysis. Furthermore, this may also address criticism over how CVSS is "insufficient" in scoring the realized impact of vulnerabilities, since many critics are still trying to morph CVSS into a risk calculator without taking into account other attributes of the vulnerability or environment.



Accepted or Rejected Proposals

CVSS3.x_AV_Clarification: Within Attack Vector, expand Adjacent and clarify other definitions [v3.x]

Reword Attack Vector references to the OSI Layer model to make it easier to understand for non-network geeks. Expand Adjacent to include logically adjacent or trusted networks (MPLS, VPNs, etc.). Provide guidance in the User Guide on using Modified Attack Vector Environmental Metric when resources are exclusively behind a firewall.

Provide guidance for using Security Requirements in Environmental Metrics Group [v3.x]

The current Specification and User Guide do not give sufficient guidance on choosing Security Requirements metric values. The proposal is to add text, primarily to the User Guide, to fix this. Not yet incorporated.

Split some aspects of existing Attack Complexity into a new Attack Requirements metric [v4.0]

This is a substantial change targeted for CVSS v4.0.

The scope of Attack Complexity will be reduced to reflect the exploit engineering complexity required to evade or circumvent defensive or security-enhancing technologies. The new Attack Requirements metric will reflect the prerequisite conditions of the vulnerable component that make the attack possible.

Remove naming confusion in Environmental Score formula

The Environmental Score formula is confusing as it reuses a name used for another formula:

10) Please take the following comment as a constructive comment: I think it is confusing to have the Modified Impact Sub Score include as part of its formula another Modified Impact Sub Score, even though another formula is given to calculate this score. To me, it is easy to get confused when talking about the Modified Impact Sub Score that is the result of the formula and the Modified Impact Sub Score that is an input into the equation, even though we have a separate formula.
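One possible disambiguation is to give the inner value its own name; the name MISS below is a suggestion, not from the v3.0 specification, and the formulas are as published there:

    MISS = Minimum(1 - (1 - M.C x CR) x (1 - M.I x IR) x (1 - M.A x AR), 0.915)

    If Modified Scope is Unchanged:
        ModifiedImpact = 6.42 x MISS
    If Modified Scope is Changed:
        ModifiedImpact = 7.52 x (MISS - 0.029) - 3.25 x (MISS - 0.02)^15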

Clarify that we only score increases in impacts

Update the CVSS Specification to clarify that only impacts that increase as the result of a successful exploit of a vulnerability should be scored. Specifically, impacts that an attacker can cause before exploiting the vulnerability should not be included in the CVSS score. For example, imagine a vulnerability that can only be exploited by a user with a "backup" privilege that allows the user read access to all data on a system. The vulnerability allows users with the "backup" privilege to escalate privileges to a user with the ability to read and write any data on the system. This should only be scored as an Integrity impact, as the user could read all data without the vulnerability being present.

The CVSS specification should make clear that an attacker is assumed to have detailed knowledge of any software and hardware that he/she can readily obtain and study before the attack, or for which detailed knowledge is readily obtainable. This does not extend to site-specific data or configuration, such as cryptographic keys or IP addresses. A statement of this type is needed so scorers don't score Attack Complexity as High purely because they believe it is difficult for an attacker to understand the target system. We assume that the attacker has ample time to perform this work before launching an attack, and it should not influence the score.

Optionally, we could extend this by stating that an attacker is assumed to have detailed knowledge of all software and hardware, including bespoke code that is unique to a site, e.g., a person attacking facebook.com is assumed to have knowledge of code that is specific to that website. This is basically Kerckhoffs's principle, commonly quoted as "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". However, it is usually much harder for an attacker to obtain site-specific code, so we may not wish to go this far.