Improve description of what is considered a security issue #147035
@@ -157,6 +157,7 @@ Members of the LLVM Security Response Group are expected to:
* Help write and review patches to address security issues.
* Participate in the member nomination and removal processes.

.. _security-group-discussion-medium:

Discussion Medium
=================
@@ -204,6 +205,10 @@ The LLVM Security Policy may be changed by majority vote of the LLVM Security Re
What is considered a security issue?
====================================

We define "security-sensitive" to mean that a discovered bug or vulnerability
may require coordinated disclosure, and therefore should be reported to the LLVM
Security Response Group rather than published in the public bug tracker.

The LLVM Project has a significant amount of code, and not all of it is
considered security-sensitive. This is particularly true because LLVM is used in
a wide variety of circumstances: there are different threat models, untrusted
@@ -217,31 +222,52 @@ security-sensitive). This requires a rationale, and buy-in from the LLVM
community as for any RFC. In some cases, parts of the codebase could be handled
as security-sensitive but need significant work to get to the stage where that's
manageable. The LLVM community will need to decide whether it wants to invest in
making these parts of the code securable, and maintain these security
properties over time. In all cases the LLVM Security Response Group should be
consulted, since they'll be responding to security issues filed against these
parts of the codebase.

If you're not sure whether an issue is in-scope for this security process or
not, err towards assuming that it is. The Security Response Group might agree
or disagree and will explain its rationale in the report, as well as update
this document through the above process.
|
||
The security-sensitive parts of the LLVM Project currently are the following. | ||
Note that this list can change over time. | ||
|
||
* None are currently defined. Please don't let this stop you from reporting | ||
issues to the LLVM Security Response Group that you believe are security-sensitive. | ||
|
||
The parts of the LLVM Project which are currently treated as non-security | ||
sensitive are the following. Note that this list can change over time. | ||
|
||
* Language front-ends, such as clang, for which a malicious input file can cause | ||
undesirable behavior. For example, a maliciously crafted C or Rust source file | ||
can cause arbitrary code to execute in LLVM. These parts of LLVM haven't been | ||
hardened, and compiling untrusted code usually also includes running utilities | ||
such as `make` which can more readily perform malicious things. | ||
|
making these parts of the code securable, and maintain these security properties
over time. In all cases the LLVM Security Response Group
`should be consulted <security-group-discussion-medium_>`__, since they'll be
responding to security issues filed against these parts of the codebase.

The security-sensitive parts of the LLVM Project currently are the following:
* Code generation: most miscompilations are not security sensitive. However, a
  miscompilation where there are clear indications that it can result in the
  produced binary becoming significantly easier to exploit could be considered
  security sensitive, and should be reported to the security response group.

Review comments on this point:

Could be worth going into a bit more detail here. Otherwise I expect something
like: a miscompile causes a segfault, leading to a denial-of-service attack. I
think we would be looking for something systemic that affected multiple
programs in a predictable way that an attacker could exploit. I've found it
difficult to think of concrete examples.

Yes, it would be nice if we could be more concrete in the description here.

I'm inclined to lean into the existence (or credible plausibility of existence)
of a reproducer being the key distinguisher here, i.e. leave the text as is.
Even in "obviously" security-implicated features like compiler hardening
techniques, most software bugs realistically will not reduce the security of
the customer's executable beyond a handwave-y DoS argument. (Yes, I'm aware
there's an A in CIA, but in my opinion the security community has gone way
overboard in classifying every potential process crash as a CVE, and I don't
think I'm alone in this assessment.) I've also seen cases (not in LLVM) where
an "ordinary" miscompilation bug was systematically reproducible in
security-critical code, such that it necessitated a security response despite
being seemingly benign. I think it's still a good idea for the issues
@smithp35 mentioned to be run by us, because there is an elevated risk that
they could lead to an actual security vulnerability, but at the same time I
don't want to give security researchers the impression that these sections of
the code must automatically entitle them to a CVE they can slap on their CVs.

Thanks for the interesting discussion on this!

I'd like the wording for this point to be more defensive. Something along
these lines: [suggested wording not captured in this view]. The main thing
this adds is that it must affect real-world code to even be up for
consideration. Otherwise it is too easy to come up with plausible
hypotheticals. If something does not affect real-world code, then coordinated
disclosure is obviously unnecessary.
* Run-time libraries: only parts of the run-time libraries are considered
  security-sensitive. The parts that are not considered security-sensitive are
  documented below.

The following parts of the LLVM Project are currently treated as non-security
sensitive:

* LLVM's language frontends, analyzers, optimizers, and code generators for
  which a malicious input can cause undesirable behavior. For example, a
  maliciously crafted C, Rust or bitcode input file can cause arbitrary code to
  execute in LLVM. These parts of LLVM haven't been hardened, and handling
  untrusted code usually also includes running utilities such as make which can
  more readily perform malicious things. For example, vulnerabilities in clang,
  clangd, or the LLVM optimizer in a JIT caused by untrusted inputs are not
  security-sensitive. (kbeyls marked this conversation as resolved.)

* The following parts of the run-time libraries are explicitly not considered
  security-sensitive:
  * parts of the run-time libraries that are not meant to be included in
    production binaries. For example, most sanitizers are not considered
    security-sensitive as they are meant to be used during development only,
    not in production.

Review comments on this point:

We might be able to be more specific. As well as (most of) the sanitizers, I
expect instrumentation for profiling or coverage would be included. I think
that leaves language runtimes like builtins, the scudo allocator and other
security feature runtimes, which would be in scope. For the sanitizers I think
it is only parts of ubsan. Helpfully, ubsan has this paragraph which says that
traps-only and minimal-runtime are suitable for production:
https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#security-considerations

I was thinking that there is value in sticking with "meant to be included in
production binaries" and not specifying that further in detail, as otherwise
we might end up having problems with the list becoming outdated over time...

I'm generally fine with the wording as-is, but we may want to add the example
from @smithp35 as a "gotcha" that developers should consider. That being said,
as I sit here trying to come up with a wording, I can't come up with anything
that doesn't sound excessively stilted.

We could add more examples to the "For example, ..." sentence. But I'm not
sure it would add enough value to offset the "cost" of making the
documentation a bit longer to read through... I guess one way to change the
sentence would be: "For example, most sanitizers are not considered
security-sensitive as they are meant to be used during development only, not
in production. Profiling and coverage-related run-time code is similarly meant
for development only." Overall, I'm still in favour of keeping the text as
simple as it is. Of course, we can always adapt/improve it based on future
experience.
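As background for the ubsan discussion above, the Clang UndefinedBehaviorSanitizer documentation describes two deployment modes it considers suitable for production. A hedged sketch of the relevant build flags (the source file name and the choice of the signed-integer-overflow check are illustrative placeholders, not part of the proposed policy text):

```shell
# Traps-only mode: no UBSan diagnostic runtime is linked; detected UB
# executes a trap instruction instead of printing a report.
clang -O2 -fsanitize=signed-integer-overflow \
      -fsanitize-trap=signed-integer-overflow app.c -o app

# Minimal runtime: links a small runtime in place of the full diagnostic
# runtime; the full runtime is the part not meant for production binaries.
clang -O2 -fsanitize=signed-integer-overflow \
      -fsanitize-minimal-runtime app.c -o app
```

This is a configuration fragment only; see the linked "Security Considerations" section of the Clang docs for the authoritative statement of which modes are production-suitable.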
  * for libc and libc++: if a user calls library functionality in an undefined
    or otherwise incorrect way, this will most likely not be considered a
    security issue, unless the libc/libc++ documentation explicitly promises to
    harden or catch that specific undefined behaviour or incorrect usage.
  * unwinding and exception handling: the implementations are not hardened
    against malformed or malicious unwind or exception handling data. This is
    not considered security sensitive.
Note that both the explicit security-sensitive and explicit non-security
sensitive lists can change over time. If you're not sure whether an issue is
in-scope for this security process or not, err towards assuming that it is. The
Security Response Group might agree or disagree and will explain its rationale
in the report, as well as update this document through the above process.

.. _CVE process: https://cve.mitre.org
.. _report a vulnerability: https://github.com/llvm/llvm-security-repo/security/advisories/new
Additional review discussion:

I wonder if it would be helpful to add a concise tl;dr here? Admittedly,
"artifacts" isn't as precise as I'd like it to be, but maybe a bit of
iteration on ^ can help us set the tone for this section?

IMHO this is too specific for a tl;dr section. I'm also not wild about calling
out artifacts, since I wouldn't consider LLVM runtimes to be "artifacts" of
LLVM but part of it (putting aside the obvious technicality of LLVM being used
to build LLVM), yet we very clearly mention that LLVM runtimes are in scope if
they're intended to be consumed by downstream users. I'd focus on the intended
user, since I see that as the main distinguisher throughout this document,
something along the lines of: "issues with LLVM that affect developers who
consciously chose to use LLVM tooling to generate binaries are considered
out-of-scope; issues with LLVM which affect the end users of said developer
are in scope."

Yeah, I'm not strongly of the opinion that a tl;dr is necessary, so I'm also
content if we'd rather go without. In any case, I do agree that intended user
is a much better focus than 'artifacts' - thanks for that. :)

I quite like the sentence that @wphuhn-intel suggested above as a general
principle: "issues with LLVM that affect developers who consciously chose to
use LLVM tooling to generate binaries are considered out-of-scope, issues with
LLVM which affect the end users of said developer are in scope." That being
said, I'm not sure it would correctly cover all the different cases we
document in further detail. I'd prefer to land this PR without a tl;dr, as is.
We can then see what the feedback is on this version of the documentation, and
add a well-thought-through tl;dr if the feedback is clear that it would be a
major improvement.