CVSS v3.1 Specification Document

CVSS version 3.1 released

This page is updated with each release of the CVSS standard. It is currently CVSS version 3.1, released in June 2019. If you want to use a specific version of the Specification Document, use:

  • https://www.first.org/cvss/v3.1/specification-document for CVSS version 3.1
  • https://www.first.org/cvss/v3.0/specification-document for CVSS version 3.0

This document is also available in PDF format (469 KiB).

The Common Vulnerability Scoring System (CVSS) is an open framework for communicating the characteristics and severity of software vulnerabilities. CVSS consists of three metric groups: Base, Temporal, and Environmental. The Base group represents the intrinsic qualities of a vulnerability that are constant over time and across user environments, the Temporal group reflects the characteristics of a vulnerability that change over time, and the Environmental group represents the characteristics of a vulnerability that are unique to a user's environment. The Base metrics produce a score ranging from 0 to 10, which can then be modified by scoring the Temporal and Environmental metrics. A CVSS score is also represented as a vector string, a compressed textual representation of the values used to derive the score. This document provides the official specification for CVSS version 3.1.

The most up-to-date CVSS resources can be found at https://www.first.org/cvss/

CVSS is owned and managed by FIRST.Org, Inc. (FIRST), a US-based non-profit organization whose mission is to help computer security incident response teams across the world. FIRST reserves the right to update CVSS and this document periodically at its sole discretion. While FIRST owns all right and interest in CVSS, it licenses it to the public freely for use, subject to the conditions below. Membership in FIRST is not required to use or implement CVSS. FIRST does, however, require that any individual or entity using CVSS give proper attribution, where applicable, that CVSS is owned by FIRST and used by permission. Further, FIRST requires as a condition of use that any individual or entity which publishes scores conforms to the guidelines described in this document and provides both the score and the scoring vector so others can understand how the score was derived.

The Common Vulnerability Scoring System (CVSS) captures the principal technical characteristics of software, hardware, and firmware vulnerabilities. Its outputs include numerical scores indicating the severity of a vulnerability relative to other vulnerabilities.

CVSS is composed of three metric groups: Base, Temporal, and Environmental. The Base Score reflects the severity of a vulnerability according to its intrinsic characteristics, which are constant over time, and assumes the reasonable worst-case impact across different deployed environments. The Temporal metrics adjust the Base severity of a vulnerability based on factors that change over time, such as the availability of exploit code. The Environmental metrics adjust the Base and Temporal severities to a specific computing environment. They consider factors such as the presence of mitigations in that environment.

Base Scores are usually produced by the organization maintaining the vulnerable product, or by a third party scoring on its behalf. Typically, only the Base metrics are published, as they do not change over time and are common to all environments. CVSS consumers should supplement the Base Score with Temporal and Environmental scores specific to their use of the vulnerable product to produce a severity more accurate for their organizational environment. Consumers may use CVSS information as input to an organizational vulnerability management process that also considers factors outside the scope of CVSS to rank the threats to their technology infrastructure and make informed remediation decisions. Such factors may include: number of customers on a product line, monetary losses due to a breach, life or property threatened, or public sentiment on highly publicized vulnerabilities. These are outside the scope of CVSS.

The benefits of CVSS include the provision of a standardized, vendor-agnostic vulnerability scoring methodology. It is an open framework, providing transparency into the individual characteristics and methodology used to derive a score.

1.1. Metrics

CVSS consists of three metric groups: Base, Temporal, and Environmental, each of which consists of a set of metrics, as shown in Figure 1.


Figure 1: CVSS metric groups

The Base metric group represents the intrinsic characteristics of a vulnerability that are constant over time and across user environments. It is composed of two sets of metrics: the Exploitability metrics and the Impact metrics.

The Exploitability metrics reflect the ease and technical means by which the vulnerability can be exploited. That is, they represent characteristics of the thing that is vulnerable, which we refer to formally as the vulnerable component. The Impact metrics reflect the direct consequence of a successful exploit, and represent the consequence to the thing that suffers the impact, which we refer to formally as the impacted component.

While the vulnerable component is typically a software application, module, driver, etc. (or possibly a hardware device), the impacted component could be a software application, a hardware device, or a network resource. This potential for measuring the impact of a vulnerability on components other than the vulnerable component was a key feature introduced with CVSS v3.0. This property is captured by the Scope metric, discussed later.

The Temporal metric group reflects the characteristics of a vulnerability that may change over time but not across user environments. For example, the presence of a simple-to-use exploit kit would increase the CVSS score, while the creation of an official patch would decrease it.

The Environmental metric group represents the characteristics of a vulnerability that are relevant and unique to a particular user's environment. Considerations include the presence of security controls that may mitigate some or all of the consequences of a successful attack, and the relative importance of a vulnerable system within a technology infrastructure.

Each of these metrics is discussed in more detail below. The User Guide includes scoring rubrics for the Base metrics that may be helpful when scoring.

1.2. Scoring

When an analyst assigns values to the Base metrics, the Base equation calculates a score ranging from 0.0 to 10.0, as illustrated in Figure 2.


Figure 2: CVSS Equations and Metrics


Specifically, the Base equation is derived from two sub-equations: the Exploitability sub-score equation and the Impact sub-score equation. The Exploitability sub-score equation is derived from the Base Exploitability metrics, while the Impact sub-score equation is derived from the Base Impact metrics.

The Base Score can then be refined by scoring the Temporal and Environmental metrics in order to more accurately reflect the relative severity posed by a vulnerability to a user's environment at a specific point in time. Scoring the Temporal and Environmental metrics is not required, but is recommended for more precise scores.

Generally, the Base and Temporal metrics are specified by vulnerability bulletin analysts, security product vendors, or application vendors because they typically possess the most accurate information about the characteristics of a vulnerability. The Environmental metrics are specified by end-user organizations because they are best able to assess the potential impact of a vulnerability within their own computing environment.

Scoring CVSS metrics also produces a vector string, a textual representation of the metric values used to score the vulnerability. This vector string is a specifically formatted text string that contains each value assigned to each metric, and should always be displayed with the vulnerability score.

The scoring equations and vector string are explained later in this document.

Note that all metrics should be scored under the assumption that the attacker has already located and identified the vulnerability. That is, the analyst does not need to consider the means by which the vulnerability was identified. In addition, it is likely that many different types of individuals will score vulnerabilities (e.g., software vendors, vulnerability bulletin analysts, security product vendors); however, note that vulnerability scoring is intended to be independent of the individual and their organization.

2.1. Exploitability Metrics

As mentioned above, the Exploitability metrics reflect the characteristics of the thing that is vulnerable, which we refer to formally as the vulnerable component. Therefore, each of the Exploitability metrics listed below should be scored relative to the vulnerable component, and reflect the properties of the vulnerability that lead to a successful attack.

When scoring Base metrics, it should be assumed that the attacker has advanced knowledge of the weaknesses of the target system, including its general configuration and default defense mechanisms (e.g., built-in firewalls, rate limits, traffic policing). For example, exploiting a vulnerability that results in repeatable, deterministic success should still be considered a Low value for Attack Complexity, independent of the attacker's knowledge or capabilities. Target-specific hardening and mitigations are instead reflected in the Environmental metric group.

Specific configurations should not affect any attributes contributing to the CVSS Base Score, i.e., if a specific configuration is required for an attack to succeed, the vulnerable component should be scored assuming it is in that configuration.

2.1.1. Attack Vector (AV)

This metric reflects the context by which vulnerability exploitation is possible. The value of this metric (and consequently the Base Score) will be larger the more remote (logically, and physically) an attacker can be in order to exploit the vulnerable component. The assumption is that the number of potential attackers for a vulnerability that could be exploited from across a network is larger than the number of potential attackers that could exploit a vulnerability requiring physical access to a device, and therefore warrants a greater Base Score. The list of possible values is presented in Table 1.

Table 1: Attack Vector

Metric Value | Description
Network (N) | The vulnerable component is bound to the network stack, and the set of possible attackers extends beyond the other options listed below, up to and including the entire Internet. Such a vulnerability is often termed "remotely exploitable" and can be thought of as an attack being exploitable at the protocol level one or more network hops away (e.g., across one or more routers). An example of a network attack is an attacker causing a denial of service (DoS) by sending a specially crafted TCP packet across a wide area network (e.g., CVE-2004-0230).
Adjacent (A) | The vulnerable component is bound to the network stack, but the attack is limited at the protocol level to a logically adjacent topology. This can mean an attack must be launched from the same shared physical (e.g., Bluetooth or IEEE 802.11) or logical (e.g., local IP subnet) network, or from within a secure or otherwise limited administrative domain (e.g., MPLS, secure VPN to an administrative network zone). One example of an Adjacent attack would be an ARP (IPv4) or neighbor discovery (IPv6) flood leading to a denial of service on the local LAN segment (e.g., CVE-2013-6014).
Local (L) | The vulnerable component is not bound to the network stack and the attacker's path is via read/write/execute capabilities. Either:
  • the attacker exploits the vulnerability by accessing the target system locally (e.g., keyboard, console) or remotely (e.g., SSH); or
  • the attacker relies on User Interaction by another person to perform actions required to exploit the vulnerability (e.g., using social engineering techniques to trick a legitimate user into opening a malicious document).
Physical (P) | The attack requires the attacker to physically touch or manipulate the vulnerable component. Physical interaction may be brief (e.g., evil maid attack[^1]) or persistent. An example of such an attack is a cold boot attack in which an attacker gains access to disk encryption keys after physically accessing the target system. Other examples include peripheral attacks via FireWire/USB Direct Memory Access (DMA).

Scoring Guidance: When deciding between Network and Adjacent, if an attack can be launched over a wide area network or from outside the logically adjacent administrative network domain, use Network. Network should be used even if the attacker is required to be on the same intranet to exploit the vulnerable system (e.g., the attacker can only exploit the vulnerability from inside a corporate network).

2.1.2. Attack Complexity (AC)

This metric describes the conditions beyond the attacker's control that must exist in order to exploit the vulnerability. As described below, such conditions may require the collection of more information about the target, or computational exceptions. Importantly, the assessment of this metric excludes any requirements for user interaction in order to exploit the vulnerability (such conditions are captured in the User Interaction metric). If a specific configuration is required for an attack to succeed, the Base metrics should be scored assuming the vulnerable component is in that configuration. The Base Score is greatest for the least complex attacks. The list of possible values is presented in Table 2.

Table 2: Attack Complexity

Metric Value | Description
Low (L) | Specialized access conditions or extenuating circumstances do not exist. An attacker can expect repeatable success when attacking the vulnerable component.
High (H) | A successful attack depends on conditions beyond the attacker's control. That is, a successful attack cannot be accomplished at will, but requires the attacker to invest in some measurable amount of effort in preparation or execution against the vulnerable component before a successful attack can be expected.[^2] For example, a successful attack may depend on an attacker overcoming any of the following conditions:
  • The attacker must gather knowledge about the environment in which the vulnerable target/component exists. For example, a requirement to collect details on target configuration settings, sequence numbers, or shared secrets.
  • The attacker must prepare the target environment to improve exploit reliability. For example, repeated exploitation to win a race condition, or overcoming advanced exploit mitigation techniques.
  • The attacker must inject themselves into the logical network path between the target and the resource requested by the victim in order to read and/or modify network communications (e.g., a man-in-the-middle attack).

As described in Section 2.1, detailed knowledge of the vulnerable component is outside the scope of Attack Complexity. Refer to that section for additional guidance when scoring Attack Complexity when target-specific attack mitigations are present.

2.1.3. Privileges Required (PR)

This metric describes the level of privileges an attacker must possess before successfully exploiting the vulnerability. The Base Score is greatest if no privileges are required. The list of possible values is presented in Table 3.

Table 3: Privileges Required

Metric Value | Description
None (N) | The attacker is unauthorized prior to attack, and therefore does not require any access to settings or files of the vulnerable system to carry out an attack.
Low (L) | The attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user. Alternatively, an attacker with Low privileges has the ability to access only non-sensitive resources.
High (H) | The attacker requires privileges that provide significant (e.g., administrative) control over the vulnerable component allowing access to component-wide settings and files.

Scoring Guidance: Privileges Required is typically None for hard-coded credential vulnerabilities or vulnerabilities requiring social engineering (e.g., reflected cross-site scripting, cross-site request forgery, or file parsing vulnerabilities in PDF readers).

2.1.4. User Interaction (UI)

This metric captures the requirement for a human user, other than the attacker, to participate in the successful compromise of the vulnerable component. This metric determines whether the vulnerability can be exploited solely at the will of the attacker, or whether a separate user (or user-initiated process) must participate in some manner. The Base Score is greatest when no user interaction is required. The list of possible values is presented in Table 4.

Table 4: User Interaction

Metric Value | Description
None (N) | The vulnerable system can be exploited without interaction from any user.
Required (R) | Successful exploitation of this vulnerability requires a user to take some action before the vulnerability can be exploited. For example, a successful exploit may only be possible during the installation of an application by a system administrator.

2.2. Scope (S)

The Scope metric captures whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope.

Formally, a security authority is a mechanism (e.g., an application, an operating system, firmware, a sandbox environment) that defines and enforces access control in terms of how certain subjects/actors (e.g., human users, processes) can access certain restricted objects/resources (e.g., files, CPU, memory) in a controlled manner. All the subjects and objects under the jurisdiction of a single security authority are considered to be under one security scope. If a vulnerability in a vulnerable component can affect a component which is in a different security scope than the vulnerable component, a Scope change occurs. Intuitively, whenever the impact of a vulnerability breaches a security/trust boundary and impacts components outside the security scope in which the vulnerable component resides, a Scope change occurs.


The security scope of a component encompasses other components that provide functionality solely to that component, even if these other components have their own security authority. For example, a database used solely by one application is considered part of that application's security scope even if the database has its own security authority, e.g., a mechanism controlling access to database records based on database users and the associated database privileges.

The Base Score is greatest when a scope change occurs. The list of possible values is presented in Table 5.

Table 5: Scope

Metric Value | Description
Unchanged (U) | An exploited vulnerability can only affect resources managed by the same security authority. In this case, the vulnerable component and the impacted component are either the same, or both are managed by the same security authority.
Changed (C) | An exploited vulnerability can affect resources beyond the security scope managed by the security authority of the vulnerable component. In this case, the vulnerable component and the impacted component are different and managed by different security authorities.

2.3. Impact Metrics

The Impact metrics capture the effects of a successfully exploited vulnerability on the component that suffers the worst outcome that is most directly and predictably associated with the attack. Analysts should constrain impacts to a reasonable, final outcome which they are confident an attacker is able to achieve.

Only the increase in access, privileges gained, or other negative outcome as a result of successful exploitation should be considered when scoring the Impact metrics of a vulnerability. For example, consider a vulnerability that requires read-only permissions prior to being able to exploit the vulnerability. After successful exploitation, the attacker maintains the same level of read access, and gains write access. In this case, only the Integrity Impact metric should be scored, and the Confidentiality and Availability Impact metrics should be set as None.

Note that when scoring a delta change in impact, the final impact should be used. For example, if an attacker starts with partial access to restricted information (Low Confidentiality) and successful exploitation of the vulnerability results in complete loss of confidentiality (High Confidentiality), then the resulting CVSS Base Score should reference the "end game" Impact metric value (High Confidentiality).

If a scope change has not occurred, the Impact metrics should reflect the Confidentiality, Integrity, and Availability impacts to the vulnerable component. However, if a scope change has occurred, then the Impact metrics should reflect the Confidentiality, Integrity, and Availability impacts to either the vulnerable component or the impacted component, whichever suffers the most severe outcome.

2.3.1. Confidentiality (C)

This metric measures the impact to the confidentiality of the information resources managed by a software component due to a successfully exploited vulnerability. Confidentiality refers to limiting information access and disclosure to only authorized users, as well as preventing access by, or disclosure to, unauthorized ones. The Base Score is greatest when the loss to the impacted component is highest. The list of possible values is presented in Table 6.

Table 6: Confidentiality

Metric Value | Description
High (H) | There is a total loss of confidentiality, resulting in all resources within the impacted component being divulged to the attacker. Alternatively, access to only some restricted information is obtained, but the disclosed information presents a direct, serious impact. For example, an attacker steals the administrator's password, or the private encryption keys of a web server.
Low (L) | There is some loss of confidentiality. Access to some restricted information is obtained, but the attacker does not have control over what information is obtained, or the amount or kind of loss is limited. The information disclosure does not cause a direct, serious loss to the impacted component.
None (N) | There is no loss of confidentiality within the impacted component.

2.3.2. Integrity (I)

This metric measures the impact to the integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of information. The Base Score is greatest when the consequence to the impacted component is highest. The list of possible values is presented in Table 7.

Table 7: Integrity

Metric Value | Description
High (H) | There is a total loss of integrity, or a complete loss of protection. For example, the attacker is able to modify any or all files protected by the impacted component. Alternatively, only some files can be modified, but malicious modification would present a direct, serious consequence to the impacted component.
Low (L) | Modification of data is possible, but the attacker does not have control over the consequence of a modification, or the amount of modification is limited. The data modification does not have a direct, serious impact on the impacted component.
None (N) | There is no loss of integrity within the impacted component.

2.3.3. Availability (A)

This metric measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. While the Confidentiality and Integrity impact metrics apply to the loss of confidentiality or integrity of data (e.g., information, files) used by the impacted component, this metric refers to the loss of availability of the impacted component itself, such as a networked service (e.g., web, database, email). Since availability refers to the accessibility of information resources, attacks that consume network bandwidth, processor cycles, or disk space all impact the availability of an impacted component. The Base Score is greatest when the consequence to the impacted component is highest. The list of possible values is presented in Table 8.

Table 8: Availability

Metric Value | Description
High (H) | There is a total loss of availability, resulting in the attacker being able to fully deny access to resources in the impacted component; this loss is either sustained (while the attacker continues to deliver the attack) or persistent (the condition persists even after the attack has completed). Alternatively, the attacker has the ability to deny some availability, but the loss of availability presents a direct, serious consequence to the impacted component (e.g., the attacker cannot disrupt existing connections, but can prevent new connections; the attacker can repeatedly exploit a vulnerability that, in each instance of a successful attack, leaks only a small amount of memory, but after repeated exploitation causes a service to become completely unavailable).
Low (L) | Performance is reduced or there are interruptions in resource availability. Even if repeated exploitation of the vulnerability is possible, the attacker does not have the ability to completely deny service to legitimate users. The resources in the impacted component are either partially available all of the time, or fully available only some of the time, but overall there is no direct, serious consequence to the impacted component.
None (N) | There is no impact to availability within the impacted component.

The Temporal metrics measure the current state of exploit techniques or code availability, the existence of any patches or workarounds, or the confidence in the description of a vulnerability.

3.1. Exploit Code Maturity (E)

This metric measures the likelihood of the vulnerability being attacked, and is typically based on the current state of exploit techniques, exploit code availability, or active, "in-the-wild" exploitation. Public availability of easy-to-use exploit code increases the number of potential attackers by including those who are unskilled, thereby increasing the severity of the vulnerability. Initially, real-world exploitation may only be theoretical. Publication of proof-of-concept code, functional exploit code, or sufficient technical details necessary to exploit the vulnerability may follow. Furthermore, the available exploit code may progress from a proof-of-concept demonstration to exploit code that is successful in exploiting the vulnerability consistently. In severe cases, it may be delivered as the payload of a network-based worm or virus or other automated attack tools.

The list of possible values is presented in Table 9. The more easily a vulnerability can be exploited, the higher the vulnerability score.

Table 9: Exploit code maturity

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning High.
High (H) | Functional autonomous code exists, or no exploit is required (manual trigger) and details are widely available. Exploit code works in every situation, or is actively being delivered via an autonomous agent (such as a worm or virus). Network-connected systems are likely to encounter scanning or exploitation attempts. Exploit development has reached the level of reliable, widely available, easy-to-use automated tools.
Functional (F) | Functional exploit code is available. The code works in most situations where the vulnerability exists.
Proof-of-Concept (P) | Proof-of-concept exploit code is available, or an attack demonstration is not practical for most systems. The code or technique is not functional in all situations and may require substantial modification by a skilled attacker.
Unproven (U) | No exploit code is available, or an exploit is theoretical.

3.2. Remediation Level (RL)

The Remediation Level of a vulnerability is an important factor for prioritization. The typical vulnerability is unpatched when initially published. Workarounds or hotfixes may offer interim remediation until an official patch or upgrade is issued. Each of these respective stages adjusts the Temporal Score downwards, reflecting the decreasing urgency as remediation becomes final. The list of possible values is presented in Table 10. The less official and permanent a fix, the higher the vulnerability score.

Table 10: Remediation Level

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning Unavailable.
Unavailable (U) | There is either no solution available or it is impossible to apply.
Workaround (W) | There is an unofficial, non-vendor solution available. In some cases, users of the affected technology will create a patch of their own or provide steps to work around or otherwise mitigate the vulnerability.
Temporary Fix (T) | There is an official but temporary fix available. This includes instances where the vendor issues a temporary hotfix, tool, or workaround.
Official Fix (O) | A complete vendor solution is available. Either the vendor has issued an official patch, or an upgrade is available.

3.3. Report Confidence (RC)

This metric measures the degree of confidence in the existence of the vulnerability and the credibility of the known technical details. Sometimes only the existence of vulnerabilities is publicized, but without specific details. For example, an impact may be recognized as undesirable, but the root cause may not be known. The vulnerability may later be corroborated by research which suggests where the vulnerability may lie, though the research may not be certain. Finally, a vulnerability may be confirmed through acknowledgment by the author or vendor of the affected technology. The urgency of a vulnerability is higher when a vulnerability is known to exist with certainty. This metric also suggests the level of technical knowledge available to would-be attackers. The list of possible values is presented in Table 11. The more a vulnerability is validated by the vendor or other reputable sources, the higher the score.

Table 11: Report Confidence

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning Confirmed.
Confirmed (C) | Detailed reports exist, or functional reproduction is possible (functional exploits may provide this). Source code is available to independently verify the assertions of the research, or the author or vendor of the affected code has confirmed the presence of the vulnerability.
Reasonable (R) | Significant details are published, but researchers either do not have full confidence in the root cause, or do not have access to source code to fully confirm all of the interactions that may lead to the result. Reasonable confidence exists, however, that the bug is reproducible and at least one impact is able to be verified (proof-of-concept exploits may provide this). An example is a detailed write-up of research into a vulnerability with an explanation (possibly obfuscated or "left as an exercise to the reader") that gives assurances on how to reproduce the results.
Unknown (U) | There are reports of impacts that indicate a vulnerability is present. The reports indicate that the cause of the vulnerability is unknown, or reports may differ on the cause or impacts of the vulnerability. Reporters are uncertain of the true nature of the vulnerability, and there is little confidence in the validity of the reports or whether a static Base Score can be applied given the differences described. An example is a bug report which notes an intermittent but non-reproducible crash, with evidence of memory corruption suggesting that denial of service, or possibly more serious impacts, may result.

These metrics enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user's organization, measured in terms of complementary/alternative security controls in place, Confidentiality, Integrity, and Availability. The metrics are the modified equivalent of the Base metrics and are assigned values based on the component placement within the organizational infrastructure.


4.1. Security Requirements (CR, IR, AR)

These metrics enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user's organization, measured in terms of Confidentiality, Integrity, and Availability. That is, if an IT asset supports a business function for which Availability is most important, the analyst can assign a greater value to Availability relative to Confidentiality and Integrity. Each Security Requirement has three possible values: Low, Medium, or High.

The full effect on the Environmental Score is determined by the corresponding Modified Base Impact metrics. That is, these metrics modify the Environmental Score by re-weighting the Modified Confidentiality, Integrity, and Availability impact metrics. For example, the Modified Confidentiality (MC) impact metric has increased weight if the Confidentiality Requirement (CR) is High. Likewise, the Modified Confidentiality impact metric has decreased weight if the Confidentiality Requirement is Low. The Modified Confidentiality impact metric weighting is neutral if the Confidentiality Requirement is Medium. This same process is applied to the Integrity and Availability Requirements.

Note that the Confidentiality Requirement will not affect the Environmental Score if the (Modified Base) Confidentiality impact is set to None. Also, increasing the Confidentiality Requirement from Medium to High will not change the Environmental Score when the (Modified Base) impact metrics are set to High. This is because the Modified Impact Sub-Score (the part of the Modified Base Score that calculates impact) is already at its maximum value of 10.

The list of possible values is presented in Table 12. For brevity, the same table is used for all three metrics. The greater the Security Requirement, the higher the score (recall that Medium is considered the default).

Table 12: Security requirements

Metric Value | Description
Not Defined (X) | Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Environmental Score, i.e., it has the same effect on scoring as assigning Medium.
High (H) | Loss of [Confidentiality | Integrity | Availability] is likely to have a catastrophic adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
Medium (M) | Loss of [Confidentiality | Integrity | Availability] is likely to have a serious adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
Low (L) | Loss of [Confidentiality | Integrity | Availability] is likely to have only a limited adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).

4.2. Modified Base Metrics

These metrics enable the analyst to override individual Base metrics based on specific characteristics of a user's environment. Characteristics that affect Exploitability, Scope, or Impact can be reflected via an appropriately modified Environmental Score.

The full effect on the Environmental Score is determined by the corresponding Base metrics. That is, these metrics modify the Environmental Score by overriding Base metric values, prior to applying the Environmental Security Requirements. For example, the default configuration for a vulnerable component may be to run a listening service with administrator privileges, for which a compromise might grant an attacker Confidentiality, Integrity, and Availability impacts that are all High. Yet, in the analyst's environment, that same Internet service might be running with reduced privileges; in that case, Modified Confidentiality, Modified Integrity, and Modified Availability might each be set to Low.

For brevity, only the names of the Modified Base metrics are mentioned. Each Modified Environmental metric has the same values as its corresponding Base metric, plus a value of Not Defined. Not Defined is the default, and uses the value of the associated Base metric.

The intent of these metrics is to define the mitigations in place for a given environment. It is acceptable to use the Modified metrics to describe situations that increase the Base Score. For example, the default configuration of a component may be to require high privileges to access a particular function, but in the analyst's environment there may be no privileges required. The analyst can set Privileges Required to High and Modified Privileges Required to None to reflect this more serious condition for their environment.

The list of possible values is presented in Table 13.

Table 13: Modified base metrics

Modified Base Metric | Corresponding Values
Modified Attack Vector (MAV), Modified Attack Complexity (MAC), Modified Privileges Required (MPR), Modified User Interaction (MUI), Modified Scope (MS), Modified Confidentiality (MC), Modified Integrity (MI), Modified Availability (MA) | Same values as the corresponding Base metric (see Base Metrics above), as well as Not Defined (the default).

For some purposes it is useful to have a textual representation of the numeric Base, Temporal, and Environmental scores. All scores can be mapped to the qualitative ratings defined in Table 14.[^3]

Table 14: Qualitative severity rating scale

Rating | CVSS Score
None | 0.0
Low | 0.1 - 3.9
Medium | 4.0 - 6.9
High | 7.0 - 8.9
Critical | 9.0 - 10.0

For example, a CVSS base score of 4.0 has an associated severity rating of Medium. Use of these qualitative severity ratings is optional and does not need to be included when publishing CVSS scores. They are intended to help organizations properly assess and prioritize their vulnerability management processes.

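As an illustration, the mapping in Table 14 can be expressed as a simple lookup. The following Python sketch is illustrative only; the function name is not part of the standard.

    def severity_rating(score: float) -> str:
        """Map a CVSS v3.1 score (0.0 - 10.0) to the Table 14 qualitative rating."""
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(severity_rating(4.0))  # Medium
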
The CVSS v3.1 vector string is a text representation of a set of CVSS metrics. It is commonly used to record or transfer CVSS metric information in a concise form.

The CVSS v3.1 vector string begins with the tag "CVSS:" and a numerical representation of the current version, "3.1". The metric information comes in the form of a set of metrics, each preceded by a forward slash, "/", which acts as a delimiter. Each metric is a short metric name, a colon, ":" and its associated short metric value. Short forms are defined earlier in this specification (in parentheses after each metric name and value) and are summarized in the following table.

A vector string should contain metrics in the order shown in Table 15, though other orderings are valid. All Base metrics must be included in a vector string. Temporal and Environmental metrics are optional, and omitted metrics are considered to have the value Not Defined (X). Metrics with a value of Not Defined can be explicitly included in a vector string if desired. Programs reading CVSS v3.1 vector strings must accept metrics in any order and treat unspecified Temporal and Environmental metrics as Not Defined. A vector string must not include the same metric more than once.

Table 15: Base, Temporal and Environmental Vectors

Metric Group | Metric Name (and Abbreviated Form) | Possible Values | Mandatory?
Base | Attack Vector (AV) | [N,A,L,P] | Yes
 | Attack Complexity (AC) | [L,H] | Yes
 | Privileges Required (PR) | [N,L,H] | Yes
 | User Interaction (UI) | [N,R] | Yes
 | Scope (S) | [U,C] | Yes
 | Confidentiality (C) | [H,L,N] | Yes
 | Integrity (I) | [H,L,N] | Yes
 | Availability (A) | [H,L,N] | Yes
Temporal | Exploit Code Maturity (E) | [X,H,F,P,U] | No
 | Remediation Level (RL) | [X,U,W,T,O] | No
 | Report Confidence (RC) | [X,C,R,U] | No
Environmental | Confidentiality Requirement (CR) | [X,H,M,L] | No
 | Integrity Requirement (IR) | [X,H,M,L] | No
 | Availability Requirement (AR) | [X,H,M,L] | No
 | Modified Attack Vector (MAV) | [X,N,A,L,P] | No
 | Modified Attack Complexity (MAC) | [X,L,H] | No
 | Modified Privileges Required (MPR) | [X,N,L,H] | No
 | Modified User Interaction (MUI) | [X,N,R] | No
 | Modified Scope (MS) | [X,U,C] | No
 | Modified Confidentiality (MC) | [X,N,L,H] | No
 | Modified Integrity (MI) | [X,N,L,H] | No
 | Modified Availability (MA) | [X,N,L,H] | No

For example, a vulnerability with Base metric values of "Attack Vector: Network, Attack Complexity: Low, Privileges Required: High, User Interaction: None, Scope: Unchanged, Confidentiality: Low, Integrity: Low, Availability: None" and no specified Temporal or Environmental metrics would produce the following vector:

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N

The same example with the addition of "Exploit Code Maturity: Functional, Remediation Level: Not Defined" and with the metrics in a non-preferred ordering would produce the following vector:

CVSS:3.1/S:U/AV:N/AC:L/PR:H/UI:N/C:L/I:L/A:N/E:F/RL:X

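As an illustration of the rules above (mandatory "CVSS:3.1" prefix, mandatory Base metrics, omitted Temporal and Environmental metrics defaulting to Not Defined, no duplicate metrics), the following Python sketch reads a vector string into a dictionary of metric values. The function name and error handling are illustrative only and not part of the standard.

    def parse_vector(vector: str) -> dict:
        """Parse a CVSS v3.1 vector string into a {metric: value} dictionary."""
        prefix, _, metric_part = vector.partition("/")
        if prefix != "CVSS:3.1":
            raise ValueError("vector string must begin with 'CVSS:3.1'")
        metrics = {}
        for item in metric_part.split("/"):
            name, sep, value = item.partition(":")
            if not sep or not name or not value:
                raise ValueError(f"malformed metric: {item!r}")
            if name in metrics:
                raise ValueError(f"metric {name} appears more than once")
            metrics[name] = value
        # All Base metrics are mandatory; unspecified Temporal and
        # Environmental metrics are treated as Not Defined (X).
        for base_metric in ("AV", "AC", "PR", "UI", "S", "C", "I", "A"):
            if base_metric not in metrics:
                raise ValueError(f"missing mandatory Base metric: {base_metric}")
        return metrics

    print(parse_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N"))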

CVSS v3.1 equations are defined in the subsections below. They are based on helper functions defined as follows:

  • Minimum returns the smaller of its two arguments.
  • Roundup returns the smallest number, specified to one decimal place, that is equal to or higher than its input. For example, Roundup(4.02) returns 4.1, and Roundup(4.00) returns 4.0. To ensure consistent results across hardware platforms and programming languages, see Appendix A for guidance to implementers on avoiding the small inaccuracies introduced by some floating point implementations.

Replace the individual metrics used in the equations with the associated constant listed in Section 7.4.

7.1. Base Metrics Equations

The Base Score formula depends on sub-formulas for the Impact Sub-Score (ISS), Impact, and Exploitability, all of which are defined below:

ISS = 1 - [ (1 - Confidentiality) × (1 - Integrity) × (1 - Availability) ]

Impact =
  If Scope is Unchanged: 6.42 × ISS
  If Scope is Changed: 7.52 × (ISS - 0.029) - 3.25 × (ISS - 0.02)^15

Exploitability = 8.22 × AttackVector × AttackComplexity × PrivilegesRequired × UserInteraction

BaseScore =
  If Impact <= 0: 0; otherwise:
  If Scope is Unchanged: Roundup (Minimum [(Impact + Exploitability), 10])
  If Scope is Changed: Roundup (Minimum [1.08 × (Impact + Exploitability), 10])

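A Python sketch of the Base Score equations above is shown below. It assumes the metric values have already been replaced by their numeric constants from Table 16 (Section 7.4), and it uses a naive Roundup for brevity; Appendix A describes a floating-point-safe implementation. The function names are illustrative.

    import math

    def roundup(x: float) -> float:
        # Naive Roundup for illustration only; see Appendix A for a
        # floating-point-safe implementation.
        return math.ceil(x * 10) / 10

    def base_score(av, ac, pr, ui, c, i, a, scope_changed):
        """Base Score per Section 7.1; arguments are Table 16 numeric constants."""
        iss = 1 - (1 - c) * (1 - i) * (1 - a)
        if scope_changed:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        else:
            impact = 6.42 * iss
        exploitability = 8.22 * av * ac * pr * ui
        if impact <= 0:
            return 0.0
        raw = impact + exploitability
        if scope_changed:
            raw = 1.08 * raw
        return roundup(min(raw, 10))

    # Example: CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N
    print(base_score(0.85, 0.77, 0.27, 0.85, 0.22, 0.22, 0.0, False))  # 3.8
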
7.2. Temporal metric equations

TemporalScore = Roundup (BaseScore × ExploitCodeMaturity × RemediationLevel × ReportConfidence)

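The Temporal Score equation in the same illustrative Python style (Table 16 constants, naive Roundup; names are illustrative):

    import math

    def roundup(x: float) -> float:
        # Naive Roundup for illustration only; see Appendix A.
        return math.ceil(x * 10) / 10

    def temporal_score(base, e, rl, rc):
        """Temporal Score per Section 7.2; e, rl, rc are Table 16 constants."""
        return roundup(base * e * rl * rc)

    # Base Score 3.8 with Exploit Code Maturity Functional (0.97),
    # Remediation Level and Report Confidence Not Defined (1.0).
    print(temporal_score(3.8, 0.97, 1.0, 1.0))  # 3.7
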
7.3. Environmental Metric Equations

The Environmental Score formula depends on sub-formulas for the Modified Impact Sub-Score (MISS), Modified Impact, and Modified Exploitability, all of which are defined below:

MISS = Minimum ( 1 - [ (1 - ConfidentialityRequirement × ModifiedConfidentiality) × (1 - IntegrityRequirement × ModifiedIntegrity) × (1 - AvailabilityRequirement × ModifiedAvailability) ], 0.915)

ModifiedImpact =
  If ModifiedScope is Unchanged: 6.42 × MISS
  If ModifiedScope is Changed: 7.52 × (MISS - 0.029) - 3.25 × (MISS × 0.9731 - 0.02)^13

ModifiedExploitability = 8.22 × ModifiedAttackVector × ModifiedAttackComplexity × ModifiedPrivilegesRequired × ModifiedUserInteraction

Note that the exponent at the end of the ModifiedImpact sub-formula is 13, which differs from CVSS v3.0. See the User Guide for more details on this change.

EnvironmentalScore =
  If ModifiedImpact <= 0: 0; otherwise:
  If ModifiedScope is Unchanged: Roundup ( Roundup [Minimum ([ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
  If ModifiedScope is Changed: Roundup ( Roundup [Minimum (1.08 × [ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)

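A Python sketch of the Environmental Score equations above, again assuming Table 16 numeric constants and a naive Roundup; names are illustrative.

    import math

    def roundup(x: float) -> float:
        # Naive Roundup for illustration only; see Appendix A.
        return math.ceil(x * 10) / 10

    def environmental_score(mav, mac, mpr, mui, mc, mi, ma,
                            cr, ir, ar, modified_scope_changed,
                            e=1.0, rl=1.0, rc=1.0):
        """Environmental Score per Section 7.3; arguments are Table 16 constants."""
        miss = min(1 - (1 - cr * mc) * (1 - ir * mi) * (1 - ar * ma), 0.915)
        if modified_scope_changed:
            modified_impact = 7.52 * (miss - 0.029) - 3.25 * (miss * 0.9731 - 0.02) ** 13
        else:
            modified_impact = 6.42 * miss
        modified_exploitability = 8.22 * mav * mac * mpr * mui
        if modified_impact <= 0:
            return 0.0
        raw = modified_impact + modified_exploitability
        if modified_scope_changed:
            raw = 1.08 * raw
        return roundup(roundup(min(raw, 10)) * e * rl * rc)

    # Same metrics as the Base Score example, Modified Scope Unchanged, all
    # Security Requirements Medium (1.0), Exploit Code Maturity Functional (0.97).
    print(environmental_score(0.85, 0.77, 0.27, 0.85, 0.22, 0.22, 0.0,
                              1.0, 1.0, 1.0, False, e=0.97))  # 3.7
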
7.4. Metric Values

Each metric value has an associated constant which is used in the formulas, as defined in Table 16.

Table 16: Metric values

Metric | Metric Value | Numeric Value
Attack Vector / Modified Attack Vector | Network | 0.85
 | Adjacent | 0.62
 | Local | 0.55
 | Physical | 0.2
Attack Complexity / Modified Attack Complexity | Low | 0.77
 | High | 0.44
Privileges Required / Modified Privileges Required | None | 0.85
 | Low | 0.62 (or 0.68 if Scope / Modified Scope is Changed)
 | High | 0.27 (or 0.5 if Scope / Modified Scope is Changed)
User Interaction / Modified User Interaction | None | 0.85
 | Required | 0.62
Confidentiality / Integrity / Availability / Modified Confidentiality / Modified Integrity / Modified Availability | High | 0.56
 | Low | 0.22
 | None | 0
Exploit Code Maturity | Not Defined | 1
 | High | 1
 | Functional | 0.97
 | Proof of Concept | 0.94
 | Unproven | 0.91
Remediation Level | Not Defined | 1
 | Unavailable | 1
 | Workaround | 0.97
 | Temporary Fix | 0.96
 | Official Fix | 0.95
Report Confidence | Not Defined | 1
 | Confirmed | 1
 | Reasonable | 0.96
 | Unknown | 0.92
Confidentiality Requirement / Integrity Requirement / Availability Requirement | Not Defined | 1
 | High | 1.5
 | Medium | 1
 | Low | 0.5

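For implementers, the Table 16 constants can be held in simple lookup tables keyed by the abbreviated metric values. The following Python sketch is illustrative only; the structure and key names are not part of the standard.

    # Table 16 constants keyed by abbreviated metric value.
    WEIGHTS = {
        "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
        "AC": {"L": 0.77, "H": 0.44},
        # Privileges Required depends on Scope: (Scope Unchanged, Scope Changed).
        "PR": {"N": (0.85, 0.85), "L": (0.62, 0.68), "H": (0.27, 0.5)},
        "UI": {"N": 0.85, "R": 0.62},
        "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},  # C, I, A and their Modified forms
        "E": {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91},
        "RL": {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95},
        "RC": {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92},
        "REQ": {"X": 1.0, "H": 1.5, "M": 1.0, "L": 0.5},  # CR, IR, AR
    }

    # Example: Privileges Required: Low is 0.62, or 0.68 when Scope is Changed.
    unchanged, changed = WEIGHTS["PR"]["L"]
    print(unchanged, changed)  # 0.62 0.68
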
7.5. A word about CVSS v3.1 equations and scoring

The CVSS v3.1 formula provides a mathematical approximation of all possible metric combinations ranked in order of severity (a vulnerability lookup table). To form the lookup table, the CVSS Special Interest Group (SIG) assigned metric values and a severity group (Low, Medium, High, Critical) to real vulnerabilities. Having defined the acceptable numeric ranges for each severity group, the SIG then collaborated with Deloitte & Touche LLP to adjust the formula parameters in order to align the metric combinations to the SIG's proposed severity ratings.

Since there is a limited number of possible numeric outcomes (101 outcomes, ranging from 0.0 to 10.0), multiple scoring combinations may produce the same numeric score. In addition, some numeric scores may be skipped because the weights and calculations are derived from the severity ordering of the metric combinations. Further, in some cases, metric combinations may deviate from the desired severity rating. This is unavoidable, and a simple correction is not readily available because adjustments made to one metric value or equation parameter in order to fix one deviation cause other, potentially more severe, deviations.

By consensus, and as was done with CVSS v2.0, the acceptable deviation was a value of 0.5. That is, all combinations of metric values used to derive the weights and calculation will produce a numeric score within its assigned severity level, or within 0.5 of that assigned level. For example, a combination expected to be rated as High may have a numeric score between 6.6 and 9.3. Finally, CVSS v3.1 retains the range from 0.0 to 10.0 for backward compatibility.

Simple implementations of the Roundup function defined in Section 7 are likely to produce different results across programming languages and hardware platforms. This is due to the small inaccuracies that occur when using floating point arithmetic. For example, although the intuitive result of 0.1 + 0.2 is 0.3, JavaScript implementations on many systems return 0.30000000000000004. A simple Roundup implementation would round this up to 0.4, which is counterintuitive.

CVSS formula implementers should take steps to avoid these types of problems. Different techniques may be required for different languages and platforms, and some may provide standard functionality that minimizes or completely avoids such problems.

One suggested approach is for the Roundup function to first multiply its input by 100,000 and convert the result to the nearest integer. The remaining rounding is then performed using integer arithmetic only, which is not subject to floating point inaccuracies. Example pseudocode for such an implementation is:

  1. Roundup (input):
  2.   int_input = round_to_nearest_integer (input * 100000)
  3.   if (int_input % 10000) == 0:
  4.     return int_input / 100000.0
  5.   else:
  6.     return (floor (int_input / 10000) + 1) / 10.0

The floor function on line 6 represents integer division, i.e., the largest integer value that is less than or equal to its input. Many programming languages include a floor function by default.

Line 3 checks whether the four least significant digits of the integer are all zeros, e.g., an input of 1.200003 would be converted by line 2 to 120,000, making the result of the modulo operation 0 and therefore the if condition TRUE. If TRUE, no further rounding is required. If FALSE, the result is rounded up to the next highest number with one decimal place; line 6 performs this on a value ten times larger than the final result so that integer arithmetic can be used, before the division by 10.0.

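A Python rendering of the pseudocode above is sketched below, assuming Python's built-in round() serves as round_to_nearest_integer and math.floor as floor.

    import math

    def roundup(value: float) -> float:
        """Roundup per Appendix A, using integer arithmetic to absorb small
        floating point inaccuracies (e.g. 0.1 + 0.2 == 0.30000000000000004)."""
        int_input = round(value * 100000)
        if int_input % 10000 == 0:
            return int_input / 100000.0
        return (math.floor(int_input / 10000) + 1) / 10.0

    print(roundup(4.02))       # 4.1
    print(roundup(4.00))       # 4.0
    print(roundup(0.1 + 0.2))  # 0.3, not 0.4
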
FIRST sincerely appreciates contributions from the following members of the CVSS Special Interest Group (SIG), listed in alphabetical order:

  • Adam Maris (Red Hat)
  • Arkadeep Kundu (Dell)
  • Arnold Yoon (Dell)
  • Art Manion (CERT/CC)
  • Bruce Lowenthal (Oracle)
  • Bruce Monroe (Intel)
  • Charles Wergin (NIST)
  • Christopher Turner (NIST)
  • Cosby Clark (IBM)
  • Dale Rich (Deposit and Clearing Trust Corporation)
  • Damir 'Gaus' Rajnovic (Panasonic)
  • Daniel Sommerfeld (Microsoft)
  • Darius Wiles (Oracle)
  • Dave Dugal (Juniper)
  • Deana Shick (CERT/CC)
  • Fábio Olive Leite (Red Hat)
  • James Kohli (GE Healthcare)
  • Jeffrey Heller (Sandia National Laboratories)
  • John Stupi (Cisco)
  • Jorge Orchilles (Citi)
  • Karen Scarfone (Scarfone Cybersecurity)
  • Luca Allodi (Eindhoven University of Technology)
  • Masato Terada (Information Technology Promotion Agency, Japan)
  • Max Heitman (Citi)
  • Melinda Rosario (Secure Works)
  • Nazira Carlage (Dell)
  • Rani Kehat (Radiflow)
  • Renchie Abraham (SAP)
  • Sasha Romanosky (Carnegie Mellon University)
  • Scott Moore (IBM)
  • Troy Fridley (Cisco)
  • Vijayamurugan Pushpanathan (Schneider Electric)
  • Wagner Santos (UFCG)

FIRST would also like to thank Abigail Palacios and Vivian Smith of Conrad Inc. for their tireless work in facilitating CVSS SIG meetings.

  • CVSS Home Page - https://www.first.org/cvss/
    The main web page for all CVSS resources, including the latest version of the CVSS standard.

  • Specification Document - https://www.first.org/cvss/specification-document
    The most recent revision of this document, which defines the metrics, formulas, qualitative rating scale, and vector string.

  • User Guide - https://www.first.org/cvss/user-guide
    As a supplement to the Specification, the User's Guide includes a more detailed discussion of the CVSS standard, including specific use cases, scoring guidelines, scoring rubrics, and a glossary of terms used in the Specification and User's Guide documents.

  • Examples Document - https://www.first.org/cvss/examples
    Includes scores of public vulnerabilities and explanations of why particular metric values were chosen.

  • Calculator - https://www.first.org/cvss/calculator/3.1
    A reference implementation of the CVSS standard that can be used to generate scores. The underlying code is documented and can be used as part of other implementations.


  • JSON and XML Schemas - https://www.first.org/cvss/data-representations
    Data representations for CVSS metrics, scores, and vector strings in JSON Schema and XML Schema Definition (XSD) representations. They can be used to store and transfer CVSS information in defined JSON and XML formats.
