[test_runner] test.py should presumably support errors in a non-entry-point library #44990
@munificent, what do you think?
Yeah, this is reasonable. The static error test stuff doesn't currently support this because we didn't have a need for it in the tests I'd seen, but it makes sense to add support for it.
This test failed because it had a test outcome expectation comment in a library which is not the entry point (cf. #44990). This CL changes the test such that said comment is located in the entry point. The trade-off is that this test now has a "reverse" import: a library with null safety enabled imports a legacy library (but, apparently, this does not cause the test to fail).

PS: This means that we don't have a test that will go green when #44990 is resolved.

Change-Id: Ie94bff22ce75bd662752c5814917e141fafc72ed
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/191365
Reviewed-by: Leaf Petersen <[email protected]>
Commit-Queue: Erik Ernst <[email protected]>
I hit this as well.

```dart
// lib.dart
library [Libraries_and_Scripts_A03_t10_lib];
//      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// [analyzer] unspecified
// [cfe] unspecified
```

```dart
// test.dart
import "lib.dart";

main() {
}
```

How can I expect a static error here? I want to check that there is a compile-time error if the library name is wrong, but there are no erroneous lines in `test.dart`.
@munificent, I think it's very likely that the tests that need to have expected errors outside the entry point could be written in such a way that they only have those errors in one library. Would it be easier to adjust the test runner such that it could handle this situation, e.g., based on a directive in the entry point?

```dart
// ----- 'my_test.dart'
// ExpectedErrorsIn='my_test_part.dart'
part 'my_test_part.dart';

void main() {}
```

```dart
// ----- 'my_test_part.dart'
part of 'some_other_library.dart';
//      ^^^^^^^^^^^^^^^^^^^^^^^^^
// [analyzer] Some error message about the wrong part-of URI.
```

This might also help avoid a performance problem: if the test runner always needs to scan through every reachable library to see if it contains any error expectations, then this might slow down test runs quite a lot.
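As a minimal sketch of how such a directive might be recognized, assuming the `ExpectedErrorsIn=` spelling from the example above (the parsing approach and the function name are illustrative, not the test runner's actual code):

```dart
// Collects the files named by `// ExpectedErrorsIn='...'` directives in a
// test's entry point. Purely illustrative; not the real test runner.
final _expectedErrorsIn =
    RegExp(r"^//\s*ExpectedErrorsIn='([^']+)'", multiLine: true);

List<String> expectedErrorFiles(String entryPointSource) => [
  for (final match in _expectedErrorsIn.allMatches(entryPointSource))
    match.group(1)!,
];

void main() {
  const source = '''
// ExpectedErrorsIn='my_test_part.dart'
part 'my_test_part.dart';
void main() {}
''';
  print(expectedErrorFiles(source)); // [my_test_part.dart]
}
```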
FWIW, this issue accounts for …
Yeah, this isn't a bad idea. I'm a little worried about the failure mode: if you forget the directive, the expectations in the other file would simply be ignored.

Here's an idea: we could say that imported/part files that contain static error expectations should end in a designated suffix. But, also, we can add support to the test runner to report an error if it sees a file that does not contain that suffix (or …).
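To illustrate the shape of this convention (the concrete suffix proposed above was lost in this copy of the thread, so `_error_lib.dart` below is purely a hypothetical placeholder):

```dart
// my_test.dart — the entry point itself contains no expectations.
import 'my_helper_error_lib.dart';

main() {}
```

```dart
// my_helper_error_lib.dart — the hypothetical suffix signals to the test
// runner (and to a presubmit check) that this file carries expectations.
String x = 1;
//         ^
// [analyzer] unspecified
// [cfe] unspecified
```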
Good point. The requirement that we'd want to enforce here is a property of the test source code alone, so it should be sufficient to check this kind of well-formedness as a presubmit check (no need to do it for every execution of the test runner). We would then detect that a few test entry point libraries have …
SGTM.
I just ran into this issue too. I think this is a bigger problem than it appears to be. It is very hard to manage these tests because they show up on the results page as failures. Whenever they get 'tickled', either by new features, a new config, or a new backend, they become effectively impossible to triage without domain knowledge.
Actually, this issue is even bigger than I had suspected. Crucially, when failure tests are written with failures outside of the entry library, we do not validate any errors; we only verify that the tests continue to fail. This results in significantly less test coverage and causes these tests to bitrot.

I've created an experimental CL to crawl local imports / part files and add any static errors uncovered. This CL does not handle the case where the errors are in another package, but it does manage to address over half of these tests. As might be expected, there are a number of issues with some of these failure tests; for example, some have incorrect error strings, some have missing errors, etc.

I'm not sure we actually want to try to land the above CL, but I think it is important that we fix this before these tests rot further.
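As a rough sketch of what such a crawl involves (this is not the experimental CL; the file handling, the regexes, and the function name are assumptions for illustration):

```dart
// Walks a test's local imports and parts, collecting every file that
// contains a static error expectation comment. Illustrative only.
import 'dart:io';

final _directive =
    RegExp(r'''^\s*(?:import|part)\s+['"]([^'"]+)['"]''', multiLine: true);
final _expectation = RegExp(r'//\s*\[(analyzer|cfe|web)\]');

Set<String> filesWithExpectations(String entryPoint) {
  final result = <String>{};
  final visited = <String>{};
  void visit(String path) {
    if (!visited.add(path)) return; // Already seen; avoid cycles.
    final file = File(path);
    if (!file.existsSync()) return;
    final source = file.readAsStringSync();
    if (_expectation.hasMatch(source)) result.add(path);
    for (final match in _directive.allMatches(source)) {
      final target = match.group(1)!;
      // Follow only relative URIs; package: and dart: imports are the
      // "another package" case not handled here.
      if (!target.contains(':')) {
        visit(file.parent.uri.resolve(target).toFilePath());
      }
    }
  }
  visit(entryPoint);
  return result;
}

void main() => print(filesWithExpectations('scratch_test.dart'));
```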
It seems like this issue in the test runner causes us to have failures where both the test and the runtime do the right thing (the test expects a compile-time error, and the runtime produces a compile-time error), but the test runner reports it as a failure because it thinks the test should pass (e.g., …).

This is very problematic, as backends want to have no approved failures in the results database, but that cannot be achieved while the test runner has this problem. @munificent, as you're assigned as the owner, are you working on a fix for this?
cc @kallentu
Alas, no, I don't have any bandwidth for the test runner these days.
This is probably going to be pretty important for macros. @davidmorgan, any interest in learning more about how the test runner works...? :)
I think so, yes; I have a TODO from yesterday to start looking at language tests for macros, and this does indeed look relevant.
I'd be happy to spend time helping you ramp up on the test runner. Feel free to throw something on my calendar and I can give you a braindump.
SG; knowledge transfer just before vacation does not seem like a winning proposition, so I sent an invitation for 2024 ;)
I resurrected Joshua's PR from #44990 (comment) and the results seem promising: quite a few CompileTimeError -> Pass transitions from existing expected CompileTimeErrors that are already correct: https://dart-ci.firebaseapp.com/cl/345541/2

I started fixing some not-quite-correct co19 tests, then noticed they're not in the SDK depot. @munificent, guessing here, is the path to land this with co19 fixes: 1) get the test runner change ready, 2) merge the co19 fixes, 3) merge the test runner fix along with a DEPS bump so the co19 changes roll at the same time?
I'm not sure how co19 changes get rolled in. @eernstg would know better how we should manage this transition.
For the co19 tests, I think a good approach would be to create issues on the co19 repo, https://github.com/dart-lang/co19/issues, rather than changing the co19 tests directly. This will allow the usual pipeline to be used (that is: updating the co19 repo and rolling a specific commit into the SDK, thus updating the co19 tests as seen from the SDK). @sgrekhov, do you agree with this recommendation? @davidmorgan, would it be a problem for you, or anyone you're aware of, that this approach introduces a certain delay (from the time the test is adjusted until the corresponding commit is rolled into the SDK)?
@eernstg yes, I agree. @davidmorgan, please file a co19 issue with the list of not-quite-correct tests and explain what should be changed, and I'll do the work. An alternative is a direct PR to the co19 repo.
Sent dart-lang/co19#2494
Done in https://dart-review.googlesource.com/c/sdk/+/345541 ... the co19 failure fixes will follow later, so the failures are approved for now: dart-lang/co19#2497. Please reopen if anything seems not as expected :)
I think augmentation libraries are not included when the local source entities are scanned for test expectations.
That sounds likely :) @sgrekhov, do you happen to have an example test with augmentation libraries I can look at, please, so I can see what should work here?
There is no committed test yet. But please see PR dart-lang/co19#2561.
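For context, a rough sketch of the shape such a test might take. The augmentation feature was experimental at the time and its directive syntax changed repeatedly, so the directives below are illustrative assumptions, not a committed spelling:

```dart
// main_test.dart — entry point with no expectations of its own.
import augment 'main_test_aug.dart'; // experimental syntax; may differ

main() {}
```

```dart
// main_test_aug.dart — the expectation lives in the augmentation,
// so the test runner must scan this file too.
augment library 'main_test.dart'; // experimental syntax; may differ

String x = 1;
//         ^
// [analyzer] unspecified
// [cfe] unspecified
```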
Thanks! https://dart-review.googlesource.com/c/sdk/+/356401 should do it.
…libraries.

[email protected]
Change-Id: I52fd157be6ba561f571170ce393d80820b2744dc
Bug: #44990
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/356401
Reviewed-by: Erik Ernst <[email protected]>
Auto-Submit: Morgan :) <[email protected]>
Commit-Queue: Morgan :) <[email protected]>
It works! Great! Thank you!
I'd expect the test runner to recognize an error expectation in a library which is not the entry point, and consider an actual occurrence of such an error as a successful test run. For example:
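(The concrete example did not survive in this copy of the issue; the following sketch matches the file name mentioned below, `scratch_lib.dart`, with a hypothetical entry point name and an arbitrary guaranteed static error.)

```dart
// scratch_test.dart — hypothetical entry point; no expectations here.
import 'scratch_lib.dart';

main() {}
```

```dart
// scratch_lib.dart — the expectation comment lives outside the entry point.
String x = 1;
//         ^
// [analyzer] unspecified
// [cfe] unspecified
```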
Running a fresh `dartanalyzer` from commit 65fab23, we get the following error message:

…

This error message should be matched by the expectation comment in `scratch_lib.dart`, but we still get the following outcome:

…

In other words, the test runner does not seem to expect that an error expectation comment can occur in any library of a test other than the entry point.
Motivation

This can be a problem in practice: some tests are intended to handle mixed-version programs (some libraries opt in to null safety, others opt out), and they generally need to have an entry point which is opted out (in order to avoid diagnostic messages about reverse imports, optedIn --> optedOut). Next, some errors occur in opted-in code, so they can't be in the entry point.
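As a sketch of the mixed-version pattern described here (file names are illustrative; `// @dart=2.9` is the language version comment that opts a library out of null safety):

```dart
// opted_out_test.dart — legacy entry point, opted out of null safety.
// @dart=2.9
import 'opted_in_lib.dart';

main() {}
```

```dart
// opted_in_lib.dart — null safe; the error (and its expectation comment)
// can only live here, not in the opted-out entry point.
String x = null;
//         ^^^^
// [analyzer] unspecified
// [cfe] unspecified
```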
It is possible that the error expectation comments are supposed to work also in non-entry-point libraries of a test, in which case this issue should be labeled as `type-bug`. However, it could also be considered a new and possibly useful behavior, in which case it should be labeled as `type-enhancement`. So I haven't added any labels about this dimension.