Common Vulnerability Scoring System: Frequently Asked Questions (FAQ)

This page updates with each release of the CVSS standard. It currently reflects CVSS version 4.0, released on November 1, 2023. If you wish to use a specific version of the Frequently Asked Questions document, use the version-specific documents below.



Common Vulnerability Scoring System v4.0: Frequently Asked Questions (FAQ)

Also available in PDF format (519KiB).

Document Version: 1.1

The Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities. CVSS consists of four metric groups: Base, Threat, Environmental, and Supplemental. The Base group represents the intrinsic qualities of a vulnerability that are constant over time and across user environments, the Threat group reflects the characteristics of a vulnerability that change over time, and the Environmental group represents the characteristics of a vulnerability that are unique to a user's environment. Base metric values are combined with default values that assume the highest severity for Threat and Environmental metrics to produce a score ranging from 0 to 10. To further refine a resulting severity score, Threat and Environmental metrics can then be amended based on applicable threat intelligence and environmental considerations. Supplemental metrics do not modify the final score, and are used as additional insight into the characteristics of a vulnerability. A CVSS vector string consists of a compressed textual representation of the values used to derive the score.
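As the last sentence notes, a vector string is a compressed textual form of the metric values. A minimal Python sketch of splitting a v4.0 vector string into its metric/value pairs (the example vector is illustrative; a real implementation should also validate metric names and values against the specification):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v4.0 vector string into a {metric: value} mapping.

    A vector is a '/'-separated list beginning with the prefix
    'CVSS:4.0', followed by METRIC:VALUE pairs such as 'AV:N'.
    """
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"not a CVSS v4.0 vector: {prefix!r}")
    pairs = (item.split(":", 1) for item in metrics.split("/"))
    return {metric: value for metric, value in pairs}

vec = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
metrics = parse_cvss_vector(vec)
print(metrics["AV"], metrics["VC"])  # N H
```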

This document provides answers to frequently asked questions regarding CVSS version 4.0.

The most current CVSS resources can be found at https://www.first.org/cvss/

CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization, whose mission is to help computer security incident response teams across the world. FIRST reserves the right to update CVSS and this document periodically at its sole discretion. While FIRST owns all rights and interest in CVSS, it licenses it to the public freely for use, subject to the conditions below. Membership in FIRST is not required to use or implement CVSS. FIRST does, however, require that any individual or entity using CVSS give proper attribution, where applicable, that CVSS is owned by FIRST and used by permission. Further, FIRST requires as a condition of use that any individual or entity which publishes scores conforms to the guidelines described in this document and provides both the score and the scoring vector so others can understand how the score was derived.

Introduction

During the development of version 4.0 of the Common Vulnerability Scoring System (CVSS), many excellent questions and comments were provided during the public comment period.

As a supplement to the CVSS version 4.0 Specification Document, User Guide, and Examples Document, this set of frequently asked questions (FAQs) and answers has been collected in order to aid in the adoption and understanding of the new standard.

Resources & Links

Below are useful references to additional CVSS v4.0 documents.

Specification Document: includes metric descriptions, formulas, and vector strings. Available at https://www.first.org/cvss/v4.0/specification-document
User Guide: includes further discussion of CVSS v4.0, a scoring rubric, and a glossary. Available at https://www.first.org/cvss/v4.0/user-guide
Examples Document: includes examples of CVSS v4.0 scoring in practice. Available at https://www.first.org/cvss/v4.0/examples
CVSS v4.0 Calculator: reference implementation of the CVSS v4.0 equations. Available at https://www.first.org/cvss/calculator/v4.0
JSON & XML Data Representations: schema definitions. Available at https://www.first.org/cvss/data-representations
CVSS v4.0 Main Page: main page for all other CVSS resources. Available at https://www.first.org/cvss

Frequently Asked Questions

Is there a prescribed way to use CVSS Base and Environmental metrics to score a vulnerability along a long supply chain?

No.

What’s the difference between Attack Complexity (AC) and Attack Requirements (AT)?

The concept of Attack Complexity (AC) in CVSS v3.1 has been expanded into two Base metrics in v4.0: Attack Complexity (AC) and Attack Requirements (AT). The difference between them can best be understood as follows: Attack Complexity captures the actions an attacker must take to overcome security-enhancing conditions of the vulnerable system (for example, evading exploit mitigations such as ASLR), while Attack Requirements captures prerequisite conditions of the vulnerable system that must exist for the attack to succeed but are largely outside the attacker's direct control (for example, winning a race condition or achieving a machine-in-the-middle position).

Why did you use “AT” for Attack Requirements, instead of “AR”?

AR was already in use for Availability Requirements, and the specification does not allow two metrics to share the same abbreviation.

What are the boundaries between a Vulnerable System and Subsequent System?

For the purpose of determining Vulnerable System versus Subsequent System impacts, keep in mind these factors:

  1. When thinking about the Vulnerable System, consider the scope of where the vulnerability exists and from which it will be exploited; the factors for AV, AC, AT, PR, and UI will be determined based on this. If, for example, a vulnerability exists in a specific service or module which is part of a larger unit, and is not able to stand on its own, then the scope of the Vulnerable System would almost certainly be the containing unit.
    A concrete example: an application delivered by a vendor contains multiple services, one of which contains a vulnerability; if the specific service is not independent of — or, to put it another way, is integral to — the larger collection of services, the vulnerability would be reported against the application, and the Vulnerable System would likewise be the application.

  2. Subsequent Systems encompass anything that is not the Vulnerable System and is impacted in some way by the vulnerability in the Vulnerable System.
    Using the application/service example above, if the service can be separated from the rest of the components (e.g., the vendor may supply both, but one is not integral to the other), the scope of the vulnerability would be reported as within the service consumed by the application. In this example, the Vulnerable System would be the service rather than the application. If the application’s security is impacted from the vulnerability in the service, the application would be among the Subsequent Systems affected by the vulnerability.

    Be sure to check out the Examples Document for variations of how Vulnerable System and Subsequent Systems are identified and assessed.

Consumers are encouraged to consider more than just the simple numerical score and understand the full context and nuance of a vulnerability’s capabilities and impact. Also, there is important and valuable information that can be provided to the consumer that does not impact the severity of the vulnerability. For example, a denial of service (DoS) vulnerability that immediately recovers (R:A) has the same numeric score as a DoS vulnerability that leaves the system unavailable until an administrator restarts the system (R:U). However, the length of service impact may differ greatly and can be considered by the consumer when planning remediation resources.

Another example is a Border Gateway Protocol (BGP) networking vulnerability that forwards the malicious update and then crashes (AU:Y), causing a cascade outage vs. the same vulnerability that crashes without forwarding (AU:N). The technical severity is the same, but the scope of service impact is far greater for an automatable vulnerability.

Think of Supplemental Metrics as an extended vocabulary for conveying information about the extrinsic characteristics, or outward impact, of a vulnerability beyond just the technical severity. If we consider the CVSS v4.0 vector as a "vocabulary" of vulnerability assessment, the more information a supplier can provide the consumer, the more decisions can be made with complete situational awareness. Much like the expansion of the Impact Metrics, allowing suppliers to provide full disclosure on the impact of a vulnerability, Supplemental Metrics provide additional extrinsic characteristics of the vulnerability (e.g., wormable, difficulty of mitigation, objective vendor urgency, etc.) that can be used when deciding which High or Critical severity vulnerability to patch first.

It’s also important to distinguish between the assessment (assignment) of the Supplemental Metrics by the scoring provider and the assessment (response plan) of the consumer. Product suppliers assess the additional characteristics of the vulnerability and assign applicable Supplemental Metric values. Product consumers review the provided Supplemental Metric values and assess whether any modifications of their vulnerability response (e.g., mitigation priority) are warranted.
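The point that Supplemental metrics carry extra information without changing the numeric score can be illustrated by stripping them before comparing two vectors. A minimal Python sketch (the Supplemental abbreviations S, AU, R, V, RE, and U come from the v4.0 specification; the two denial-of-service vectors, differing only in Recovery, are illustrative):

```python
SUPPLEMENTAL = {"S", "AU", "R", "V", "RE", "U"}

def score_relevant(vector: str) -> str:
    """Drop Supplemental metrics from a v4.0 vector string, keeping
    only the parts that contribute to the numeric score."""
    parts = vector.split("/")
    kept = [parts[0]] + [p for p in parts[1:]
                         if p.split(":", 1)[0] not in SUPPLEMENTAL]
    return "/".join(kept)

dos_auto = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N/R:A"
dos_user = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N/R:U"
# The score-relevant portions are identical, so both receive the same score:
print(score_relevant(dos_auto) == score_relevant(dos_user))  # True
```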

What is the guidance on what action to take for the various values of the Safety (S) Supplemental Metric: Negligible (N) and Present (P)?

When a Safety metric value of Negligible has been assessed, it means that whoever performed the assessment believes there should be no concern about potential operational safety impact, unless the vulnerable device has an obvious potential impact to human safety based on how it is used in the consumer’s environment. When a Safety metric value of Present has been assessed, it means that whoever performed the assessment believes there is at least some potential for an operational safety impact in most, if not all, instances of its deployment. The context of the system and the consumer’s concerns with respect to the use of the system may suggest very different guidance for different situations.

Additionally, what is meant by “consequences of the vulnerability meet definition of IEC 61508 consequence categories of ‘marginal,’ ‘critical,’ or ‘catastrophic.’”?

The consequences of the vulnerability refer to potential Safety impacts. IEC 61508 provides some handy references for understanding potential safety categories. A snapshot of a table from Wikipedia1 on this subject is shown below:

Consequence Categories

Category: Definition
Catastrophic: Multiple loss of life
Critical: Loss of a single life
Marginal: Major injuries to one or more persons
Negligible: Minor injuries at worst

In the table above, you can see the same consequence categories introduced in the description of the Safety metric. This version of CVSS focuses on whether a safety impact does not (Negligible) or does (anything more severe than Negligible) create a potential safety concern.

What is the guidance on what action to take for the various values of the Automatable (AU) Supplemental Metric: No (N) and Yes (Y)?

If exploitation of a vulnerability is automatable by an attacker, you should consider a higher priority for mitigating or fixing the vulnerability.

An answer of Yes to Automatable is aligned with the vulnerability being wormable,2 that is, the vulnerability can be used by a self-replicating attack that spreads at machine speed. Because this reduces a defender’s available time between receiving reports of active exploitation and attacks against their own systems, consider mitigating or remediating any Automatable vulnerability as soon as there is a public proof-of-concept exploit.

What is the guidance on what action to take for the various values of the Recovery (R) Supplemental Metric: Automatic (A), User (U), and Irrecoverable (I)?

The spirit and intent of adding Recovery as a Supplemental metric was for consumers to consider the post-attack scenario and assess the actual impact.

The Recovery metric describes the method of recovery of a component or service once an attack has been carried out.

Automatic Recovery (A): The system or component can recover by itself with little or no downtime. For example, a line card in a networking device could reboot and restart on its own without requiring any intervention, just as a daemon might restart automatically after an attack. In these cases, the downtime is almost negligible.

User Recovery (U): The system or component cannot recover on its own and requires human intervention. For example, a manual restart is needed after a persistent memory leak leads to a denial of service on the system.

Irrecoverable (I): The system or component is beyond repair and cannot be recovered. For example, a CPU destroyed by a malfunctioning cooling fan, or any other hardware failure that requires replacement.

Note that User (U) represents actions possible by an operator, consumer, or end customer. The recovery of a Component/System includes service restoration in terms of availability and performance. Auto recovery is limited in scope to the component itself.

What is the guidance on what action to take for the various values of the Value Density (V) Supplemental Metric: Diffuse (D) and Concentrated (C)?

Value Density is primarily a property of the system which contains the vulnerability, rather than the vulnerability itself. Therefore, Value Density is less likely to be useful for a software vendor or maintainer who is prioritizing work within one software product. Anyone that is prioritizing vulnerability actions among work that includes multiple products (such as a manager at a large supplier deciding among projects, a coordinator, or any software consumer or system owner) should consider mitigations and remediations to systems with Concentrated Value Density as a higher priority than those with Diffuse Value Density. Concentrated Value Density systems are a higher priority to mitigate or remediate because they are a higher priority for adversaries to attack.

What is the guidance on what action to take for the various values of the Vulnerability Response Effort (RE) Supplemental Metric: Low (L), Moderate (M), and High (H)?

The intent of the Vulnerability Response Effort (RE) Supplemental Metric was for consumers to use that information to plan out their mitigation deployments and the resources that would be needed.

What is the guidance on what action to take for the various values of the Provider Urgency (U) Supplemental Metric: Clear, Green, Amber, and Red?

It may be helpful to think of Provider Urgency as the “low order bits” of the severity rating. For example, given two unauthenticated network-based denial of service vulnerabilities (CVSS-B 8.7), the vulnerability identified by the supplier (e.g., vendor) with the higher Provider Urgency may be worth remediating first.

Cases where a supplier may set a higher-than-neutral Provider Urgency include vulnerabilities whose CVSS-B score does not qualitatively reflect the criticality of the issue. Conversely, cases where a provider may set a lower Provider Urgency include vulnerabilities whose CVSS-B score is subjectively higher than the product provider would have assessed, for example, in their public security advisory.

Note that suppliers may assess a Provider Urgency that aligns with the Qualitative Severity Rating Scale3, neither raising nor lowering the base remediation priority.

While any provider along the product supply chain may provide a Supplemental Urgency rating:

Library Maintainer → OS/Distro Maintainer → Provider 1 … Provider n (PPP) → Consumer

the Penultimate Product Provider (PPP) is best positioned to provide the most direct assessment of Urgency.
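The “low order bits” idea above can be sketched as a compound sort key: order findings by CVSS-B score first, then break ties with Provider Urgency. The numeric ranking of the values (Red highest) and the record layout are assumptions for illustration:

```python
# Assumed ranking for tie-breaking only; Urgency never changes the score.
URGENCY_RANK = {"Red": 3, "Amber": 2, "Green": 1, "Clear": 0}

def remediation_order(vulns):
    """Sort vulnerabilities by CVSS-B score (descending), breaking ties
    with the Provider Urgency value (Red > Amber > Green > Clear)."""
    return sorted(vulns,
                  key=lambda v: (v["score"], URGENCY_RANK[v["urgency"]]),
                  reverse=True)

vulns = [
    {"cve": "CVE-A", "score": 8.7, "urgency": "Green"},
    {"cve": "CVE-B", "score": 8.7, "urgency": "Red"},
    {"cve": "CVE-C", "score": 9.3, "urgency": "Clear"},
]
print([v["cve"] for v in remediation_order(vulns)])
# ['CVE-C', 'CVE-B', 'CVE-A']
```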

Additionally, why were TLP colors chosen, rather than simply Low, Moderate, and High?

The intent of the Provider Urgency supplemental assessment was to provide additional insight into the vulnerability's impact. Care was taken to limit this metric to augmenting the assessment rather than overriding the CVSS-B score. Had linear values such as Low, Moderate, and High been selected for Provider Urgency, consumers might have misinterpreted it as an overriding assessment of severity, invalidating the methodically calculated CVSS-B score.

How did CVSS v4.0 address the issue of scores being too clustered toward High and Critical severity ratings?

The perception that CVSS v3.1 scores were clustered toward the Critical and High ratings was not a problem the CVSS SIG was intending to solve in v4.0. It is natural for suppliers to focus their efforts to assess, announce, and provide fixes for the most severe vulnerabilities they identify in their products. As a result, more vulnerabilities are rated higher on the severity scale than others.

It is the responsibility of the consumer to apply Threat and Environmental data into the assessment of the vulnerabilities (preferably using automation) to reduce the scores of the vulnerabilities that are not as important as others. For example, vulnerabilities that are not being exploited in the wild nor have proof-of-concept code publicly available will realize a significant reduction in the CVSS-BTE score. Exploited vulnerabilities will maintain their higher scores allowing the consumer to focus on what is important.

How does the value of Attack Complexity (AC) or Attack Requirements (AT) vary with the introduction of compensating controls (e.g., WAF, CSP, etc.)?

This is related to the application of Environmental Metrics into the vulnerability assessment. Please refer to “How does a consumer apply Environmental data into a CVSS Assessment?” below for more details.

Which value of User Interaction (UI), Passive (P) or Active (A), should be used for Reflected Cross-Site Scripting (XSS) or Stored Cross-Site Scripting vulnerabilities?

Reflected cross-site scripting requires that an individual click a specific link to trigger the exploit. That individual has the choice or opportunity to review the link prior to interacting with it. A conscious decision is made to interact with the payload, so this would be considered Active.

Stored cross-site scripting does not require a specially crafted link to trigger the exploit. An individual browsing a website could inadvertently trigger the exploit through standard browsing behaviors and would have no awareness or ability to avoid the payload prior to exploitation. Because there isn’t a conscious decision to interact with the payload, this would be considered Passive.

Prior to clicking a link, an individual has the choice or opportunity to review the link. Because the individual is making a conscious decision to interact with the payload that is delivered via a link, this would be considered Active.

Which system is considered the Vulnerable System or the Subsequent System when assessing Reflected or Stored Cross-Site Scripting (XSS) vulnerabilities?

In a cross-site scripting vulnerability, the web application that contains the injection point is the Vulnerable System. A user’s browser is the Subsequent System.

Which system is considered the Vulnerable System or the Subsequent System when assessing SQL Injection vulnerabilities?

Different scenarios may exist depending on the configuration of the applications in use. A simple example is CVE-2023-30545, where the web application that contains the injection point, the SQL application, and the direct impact of exploitation are all contained on a single system, with no Subsequent System impact.

Other common scenarios include distributed system configurations in which a SQL database application that contains a vulnerability may be the Vulnerable System, and other business applications that rely on data within that database may be impacted by exploitation of the vulnerability, and would be assessed as a Subsequent System.

“It is the responsibility of the consumer to populate the Threat Metric Exploit Maturity (E) based on information regarding the availability of exploitation code/processes and the state of exploitation techniques.”

Why is this the responsibility of the consumer and not the provider (vendor)?

This is a common question that is not unique to CVSS v4.0 but is applicable to all past versions.

If this responsibility were to be placed with the supplier/vendor, it would require ALL suppliers/vendors to have an accurate and real-time understanding of the Exploit Maturity for every vulnerability that organization has ever announced – ever. Obviously, this is not realistic and not reliable. However, this is exactly what threat intelligence sources and organizations do best. There are many free and subscription-based threat intelligence feeds available to the consumer that will allow them to apply that data to the maximum number of detected vulnerabilities in their environment.

Why does the application of Threat data only reduce the resulting CVSS Score?

This is a common question that is not unique to CVSS v4.0 but is applicable to all past versions. At first glance, this concept appears to be counter-intuitive. But there are several reasons for it that promote maturity in a Vulnerability Management (VM) program.

For a moment, consider the most dangerous vulnerabilities. If they had a base score of “10”, there’s no room to escalate the severity with threat intelligence or environmental criticality data. On the other hand, if they maxed out with a lower score (like “8” to leave room for escalation based on threat), there are even more negative side-effects starting with the fact that there would be no such thing as a Critical vulnerability without the application of threat.

When vendors score their vulnerabilities for publication, they are assuming the “reasonable worst-case scenario” as per the specification. This means while performing the assessments on newly announced vulnerabilities, providers must assume that all vulnerabilities will be exploited and fully weaponized by malicious actors. It is the responsibility of the consumer to apply available threat intelligence and determine where each of their detected vulnerabilities falls in the Exploit Maturity scale. For those vulnerabilities that ARE exploited and weaponized, their scores will remain high. For those that are not exploited and don’t have Proof-of-Concept (POC) code publicly available, those scores will fall and become less important.

How does a consumer apply threat intelligence data into a CVSS Assessment?

This is a common question that is not unique to CVSS v4.0 but is applicable to all past versions.

For Threat, the key is collecting and applying threat intelligence data to your vulnerability scan data using automation.

Reliable threat intelligence sources are much easier to find now than they were 10 years ago. There are dozens of inexpensive (or free) sources for this information in bulk, and even more if you are able to pay for a subscription service. As you would expect, not all threat intel sources are equal, and none are perfect. It is recommended to use MANY different sources of threat intelligence and combine the results before applying them to your scan data.

When you have your vulnerability threat intelligence, it should tell you where each CVE is on the threat scale. If your threat intelligence sources are not using the CVSS Exploit Maturity (E) values, you may have to translate that data to be usable. When you have machine-readable threat intelligence, reconcile that data with the CVEs identified in your vulnerability scan data.

Also, it is important to remember that intelligence changes all the time. Therefore, threat intelligence data must be re-collected and re-applied to your scan data frequently (daily updates are recommended). When these jobs are automated, this maintenance becomes much easier.
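The workflow described above (collect intelligence, translate it to Exploit Maturity values, reconcile it with scan data) can be sketched as a small merge job. The feed labels and their mapping onto E values are assumptions for illustration:

```python
# Map hypothetical threat-feed labels onto CVSS v4.0 Exploit Maturity values:
# A = Attacked, P = Proof-of-Concept, U = Unreported.
FEED_TO_E = {"active_exploitation": "A", "poc_public": "P", "none": "U"}

def apply_threat_intel(scan_cves, feed):
    """Annotate each scanned CVE with an E value from the threat feed.

    scan_cves: iterable of CVE ids from the vulnerability scanner.
    feed: {cve_id: feed_label} from (ideally several, combined) intel sources.
    CVEs absent from the feed default to 'U' (Unreported), which is what
    lowers the CVSS-BTE score of unexploited vulnerabilities.
    """
    return {cve: FEED_TO_E.get(feed.get(cve, "none"), "U")
            for cve in scan_cves}

feed = {"CVE-2024-0001": "active_exploitation", "CVE-2024-0002": "poc_public"}
scan = ["CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"]
print(apply_threat_intel(scan, feed))
# {'CVE-2024-0001': 'A', 'CVE-2024-0002': 'P', 'CVE-2024-0003': 'U'}
```

Because intelligence changes constantly, a job like this should be re-run on a schedule rather than once.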

How does a consumer apply Environmental data into a CVSS Assessment?

This is a common question that is not unique to CVSS v4.0 but is applicable to all past versions.

The Environmental metrics can be divided into two groups:

  1. “Security Requirements,” which include Confidentiality Requirements (CR), Integrity Requirements (IR), and Availability Requirements (AR), and represent the criticality of the vulnerable device.

  2. “Modified Base Metrics” which, as the name would suggest, allows the consumer to adjust the values for any of the CVSS Base Metrics as appropriate for their environment.

    It is important to understand that the Environmental Metrics are all attributes of the VULNERABLE SYSTEM. They have NO relation to the vulnerability itself.

    It is highly recommended that consumers of CVSS perform an assessment of the devices on their network and document the results in a database (such as an Asset Management Database).

    Security Requirements: Using the “Security Requirements” section of the User Guide as a guideline, the assessors will document CR, IR, and AR values in the database.

    Modified Base Metrics: Additionally, other attributes might be gathered during this assessment that can be applied to the Modified Base Metrics. This section can be more unclear since the possibilities are nearly endless. Specific configurations and security controls in place in each consumer’s environment will be different. However, here are some examples of how this can be used.

    Example 1: If some systems in a consumer’s environment are isolated from the Internet (i.e., no inbound or outbound connections), a maximum Attack Vector (AV) value of “Adjacent” can be applied to all vulnerabilities identified on those systems.

    Example 2: If a device has obvious potential to impact human safety (such as a Bluetooth insulin pump or an application that organizes first responders), sufficient impact to Integrity or Availability could result in the application of “Safety” as a subsequent impact.

    Example 3: A web application is protected by a Web Application Firewall. Attackers attempting to exploit those vulnerabilities will find those attempts much more difficult. Therefore, an increase in the Attack Complexity (AC) would be appropriate.

    Once the assessment is complete and documented, the consumer should reconcile the database with the vulnerability scan data and apply these attributes to all vulnerabilities found on each system.
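The assessment-then-reconcile workflow above can be sketched as a lookup that attaches Security Requirements and Modified Base Metric values to each finding. The asset records and field names are hypothetical:

```python
# Hypothetical asset database: per-host CR/IR/AR values plus any Modified
# Base Metric overrides documented during the environment assessment.
ASSETS = {
    "db01":  {"CR": "H", "IR": "H", "AR": "M", "modified": {}},
    # Isolated from the Internet, so cap Attack Vector at Adjacent (MAV:A):
    "plc07": {"CR": "M", "IR": "H", "AR": "H", "modified": {"MAV": "A"}},
}

def environmental_suffix(host: str) -> str:
    """Build the Environmental portion of a v4.0 vector for one host."""
    asset = ASSETS[host]
    parts = [f"CR:{asset['CR']}", f"IR:{asset['IR']}", f"AR:{asset['AR']}"]
    parts += [f"{metric}:{value}" for metric, value in asset["modified"].items()]
    return "/".join(parts)

base = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
print(base + "/" + environmental_suffix("plc07"))
# CVSS:4.0/AV:N/.../SA:N/CR:M/IR:H/AR:H/MAV:A
```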

Is EPSS or SSVC a replacement for CVSS?

No. Additional scoring systems have recently been introduced and adopted to handle complementary aspects of vulnerability assessment and patch priority. These are welcome additions to the vulnerability scoring toolbox, providing innovative exploit prediction and decision support, but they complement rather than replace CVSS.

How should the Attack Requirements metric be used with regards to system configuration?

Attack Requirements should not be used to denote uncommon or unique configuration requirements that put a system into a vulnerable state. “Execution conditions or variables of the vulnerable system” in the specification document refer only to challenges faced by an attacker in achieving successful exploitation, and not the possible configuration values that render the system vulnerable. When uncommon or unique configuration is required to put the system into a vulnerable state, producers can include those details in the vulnerability description.

The assessor should assume vulnerable configuration, as documented in CVSS v4.0 Specification Document 2.1 Exploitability Metrics.

Specific configurations should not impact any attribute contributing to the CVSS Base metric assessment; i.e., if a specific configuration is required for an attack to succeed, the vulnerable system should be assessed assuming it is in that configuration.

Version History

Date Ver Description
2023-11-01 v1.0 Initial Publication
2024-02-12 v1.1 Added FAQ on Attack Requirements and system configuration

  1. IEC 61508 - Wikipedia

  2. A “worm” is “a computer program that can run independently, can propagate a complete working version of itself onto other hosts on a network, and may consume system resources destructively” (RFC 4949).

  3. CVSS v4.0 Specification Document, Section 6