Dynamic unawaited_futures? #48419
/cc @pq, @natebosch, @parren-google, based on some recent comments on the related issues.
That sounds undecidable. Probably undesirable. And definitely expensive! I'd be wary of building a run-time check into the language. It's not clear which behavior is actually being proposed. It's not even clear what it means to ignore a future.
That's a very big "if". I have no idea where to even begin.

(All futures are eventually ignored, usually after their callbacks have triggered. The code that runs those callbacks has a reference to the future, and it does nothing with that future after reading out the state and result.) Unless we make the GC check whether a future "has been used", we can't definitively say anything. Without involving the GC, we'd have to insert checks where a future might be dropped.

Sometimes, a future should be ignored. It's also very hard to ensure that a future is not dropped without completely disallowing assignment.

(But, if someone can actually specify "unused future" in a way that makes sense, I'd be happy to see that!)
Given the challenges, and assuming a mechanism along those lines, these cases should be covered:
But not:
If there are frequent pitfalls related to variables holding futures, we could also consider a lint that disallows inference of a future type for such variables. If we are open to more radical changes, then we could consider a new type.
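For the expression-level case, Dart does already offer an explicit opt-out: the `unawaited` function in `dart:async` (available since Dart 2.15, and before that in `package:pedantic`) marks a future as deliberately discarded and silences the `unawaited_futures` lint at that call site. A minimal sketch (the `logEvent` function is a made-up example):

```dart
import 'dart:async';

// Hypothetical fire-and-forget operation.
Future<void> logEvent(String name) async {
  await Future<void>.delayed(const Duration(milliseconds: 10));
  print('logged: $name');
}

Future<void> main() async {
  // Deliberately not awaited: `unawaited` documents the intent and
  // silences the unawaited_futures lint for this expression.
  unawaited(logEvent('startup'));
  print('main continues without waiting');
}
```

This is purely static, of course; it does nothing at run time, which is exactly the gap a dynamic check would fill.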
@lrhn, you're assuming a lot of magic. There is no proposal here that requires solving the halting problem. So let me explain a little more about how simple this proposal actually is. ;-)

Note that this is exclusively intended to be used during debugging and test runs; it is not likely to be useful in deployed systems. The intention is that it should reveal that some future-related computations are delayed because they are never awaited.

The assumption is that a noteworthy subset of the Dart applications out there will be designed to use asynchrony, but not to have any code executed after `main` has completed. We consider each program execution to consist of two phases: (1) execution of `main`, and (2) whatever happens after `main` has completed. In the example, the future returned by `f` is discarded, so the print in `f` runs in phase (2), and that is exactly the kind of async leak this check should flag.

(Actually, it might be even simpler if we tie the run-time error directly to the completion of a future: if a future completes in phase (2), we throw.)

It's a little bit like a memory leak checker: for C/C++ applications whose design is intended to maintain the invariant that every piece of dynamically allocated memory must be freed exactly once during the run of the program, a leak checker detects violations of that invariant at run time.

There are lots of issues that don't exist:
During debugging, getting a run-time error which serves as a witness that something went wrong is better than not knowing that there is a problem at all.
We may wish to subset the mechanism, but as a starting point it would cause execution of code that leads to completion of any future to throw if it occurs after `main` has completed.
Right, that was the example I gave, using a non-async function.
That's basically the reason why I think we can raise an error.
I don't see why we would need to consider GC: when a future completes after the cutoff, the error is raised at that point, regardless of whether anything still references the future.
If it were a problem to keep track of the futures whose associated computation will be executed after `main` has completed, then the event loop couldn't run those computations in the first place.
For that we have
If this were a problem then it would also prevent running the code after `main` at all.
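Work scheduled via timers fits the same zone machinery as callbacks: a `ZoneSpecification` can intercept `createTimer`, so timers created after the cutoff point can be rejected as well. A minimal sketch, where the `done` flag and the error message are assumptions mirroring the discussion, not an existing API:

```dart
import 'dart:async';

bool done = false;

void main() {
  runZoned(() {
    // Created while `done` is false, so it is allowed through.
    Timer(const Duration(milliseconds: 1), () => print('timer ran'));
  }, zoneSpecification: ZoneSpecification(
    // Every Timer created inside the zone goes through this hook.
    createTimer: (self, parent, zone, duration, f) {
      if (done) throw StateError('Timer created after end of main');
      return parent.createTimer(zone, duration, f);
    },
  ));
  done = true; // From here on, creating a timer in the zone would throw.
}
```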
@parren-google wrote:
I certainly think we should use static analysis, too: it flags locations where the code is likely (or guaranteed) to discard an object which is known to be a future, so we get an early heads-up, near the source of the problem. The dynamic approach which is the topic of this issue complements the static analysis: it detects likely bugs at a very late point in time (which is inconvenient), but it is complete in the sense that it will report every case where a discarded future's computation runs too late.

So it's again a little bit like memory leak detection: the dynamic approach will detect that there is a problem no matter how it happened, whereas a static analysis is guaranteed to give up before it solves the halting problem. Once we know that there is a problem at all, we'd still need to do all kinds of smart debuggy things to detect how it happened. The ability to create cross-asynchrony stack traces could be very helpful here.
@eernstg That makes some sense. I don't actually think that approach will work well for real programs, though. So yes, I assumed magic, because this was not an option I even considered. Anything that would actually be correct would be magic.
It is true that some programs will initiate some asynchronous computations and rely on post-`main` execution. However, developers of those programs would then simply never enable this feature.

So the question is: how large is the set of programs for which all post-`main` computation is actually considered to be a bug (an async leak)? If that's a reasonably large set of programs then it may be worthwhile. But it also sounds like we could just make it a programming idiom: if you want to detect async leaks then you need to have these three lines of code in your program.
It should be fairly simple to make such a zone yourself:

import "dart:async";
Future<T> preventAsyncLeak<T>(Future<T> Function() action) {
bool done = false;
void checkNotDone() {
if (done) throw StateError("Asynchronous computation after end");
}
return runZoned(() => action().whenComplete(() {
done = true;
}), zoneSpecification: ZoneSpecification(
registerCallback: <R>(self, parent, zone, f) {
return parent.registerCallback(zone, () {
checkNotDone();
return f();
});
},
registerUnaryCallback: <R, S>(self, parent, zone, f) {
return parent.registerUnaryCallback(zone, (a) {
checkNotDone();
return f(a);
});
},
registerBinaryCallback: <R, S, T>(self, parent, zone, f) {
return parent.registerBinaryCallback(zone, (a, b) {
checkNotDone();
return f(a, b);
});
    },
run: <R>(self, parent, zone, f) {
checkNotDone();
return parent.run(zone, f);
},
runUnary: <R, T>(self, parent, zone, f, a) {
checkNotDone();
return parent.runUnary(zone, f, a);
},
runBinary: <R, S, T>(self, parent, zone, f, a, b) {
checkNotDone();
return parent.runBinary(zone, f, a, b);
},
));
}
/// Example
void main() {
preventAsyncLeak(() async {
scheduleMicrotask(() {
print("Microtask!"); // Is printed
scheduleMicrotask(() {
print("Microtask 2!"); // Throws before printing.
});
});
});
}

It'll probably not catch everything.
Cool, thank you!
So here's a concrete approach which will do what I had in mind. The basic assumption is that some asynchronous programs are designed to await all their futures during the execution of `main`.

So let's say that we have a program where an async leak is defined to be a bug. Here's a simple example:

// Stored in 'main.dart'.
Future<void> f() async {
await null; // Force asynchronous execution.
print('f runs!');
}
void g() => f(); // Ignores a future.
Future<void> main() async {
g();
print('main runs!');
}

We can run this program and observe that it prints 'main runs!' followed by 'f runs!'.

With real-world software, of course, we may not be able to detect that this kind of bug exists, because the post-`main` activity may be much harder to observe. So we apply the following standard debugging technique. We write a tiny wrapper program:

// Stored in 'debugMain.dart'.
import 'prevent_async_leak.dart';
import 'main.dart' as mainLib;
void main() => preventAsyncLeak(mainLib.main);

The library 'prevent_async_leak.dart' contains the following, basically taken from @lrhn's example here:

// prevent_async_leak.dart
import "dart:async";
Future<void> preventAsyncLeak<T>(Future<void> Function() main) {
bool done = false;
void checkNotDone() {
if (done) throw StateError("Asynchronous computation after end");
}
var action = () async => scheduleMicrotask(main);
return runZoned(
() => action().whenComplete(() {
done = true;
}),
zoneSpecification: ZoneSpecification(
registerCallback: <R>(self, parent, zone, f) {
return parent.registerCallback(zone, () {
checkNotDone();
return f();
});
},
registerUnaryCallback: <R, S>(self, parent, zone, f) {
return parent.registerUnaryCallback(zone, (a) {
checkNotDone();
return f(a);
});
},
registerBinaryCallback: <R, S, T>(self, parent, zone, f) {
return parent.registerBinaryCallback(zone, (a, b) {
checkNotDone();
return f(a, b);
});
},
run: <R>(self, parent, zone, f) {
checkNotDone();
return parent.run(zone, f);
},
runUnary: <R, T>(self, parent, zone, f, a) {
checkNotDone();
return parent.runUnary(zone, f, a);
},
runBinary: <R, S, T>(self, parent, zone, f, a, b) {
checkNotDone();
return parent.runBinary(zone, f, a, b);
},
));
}

This program runs for a while, prints 'main runs!', and then throws (just when it's about to print 'f runs!'). So we have now detected that there is an async leak in this program. Fixing the bug is a matter of normal debugging, so we don't get any particular help for that. But the point is that it is useful to know that this bug exists at all.
The lint `unawaited_futures` is intended to flag situations where a future is the result of a computation in a function body marked `async`, and it is then discarded rather than being awaited. That situation is likely to be a bug, because it allows computations associated with that future to occur much later (rather than during the execution of an `await` on the future immediately when it is received, e.g., as the return value of a function invocation), and the reordering of side effects may give rise to bugs that are very subtle and hard to fix.

However, recent discussions (and older ones, actually) have demonstrated that there are many situations where a future is thus ignored, and it may even be impossible to detect the buggy situations among all those situations using static analysis. Cf. #58656, #58650, #57769, #57653.
This obviously gives rise to the question "Could we introduce a dynamic check that detects discarded futures?" A dynamic check could be complete in the sense that it could catch every discarded future, if we can characterize such futures.
Consider the following example:
This program prints 'main runs!' followed by 'f runs!'. The idea would be that the latter causes a dynamic error a la "Someone discarded a future!", rather than an execution of `await fut`, where `fut` is the future which is returned by the execution of `f` (and discarded in `main`).

We might need the ability to execute a method `canBeIgnored()` (an extension method, presumably) on a future (presumably requiring it to be a system-provided future, not an instance of a user-written class), and this would allow the future to be awaited after `main` has returned. For any future where this method wasn't executed, the dynamic error would occur.

Can we do this? @lrhn, what do you think?
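As a thought experiment, `canBeIgnored()` could be prototyped today as an extension method that records the future in a registry consulted by a dynamic leak checker. Everything below (the `Expando`-based bookkeeping and the names) is hypothetical, not an existing API:

```dart
import 'dart:async';

// Hypothetical registry of futures that are allowed to complete after
// `main`; an Expando attaches the mark without keeping the future alive.
final Expando<bool> _ignorable = Expando<bool>('ignorable');

extension IgnorableFuture<T> on Future<T> {
  /// Marks this future as deliberately discarded, so a dynamic leak
  /// checker would skip it when reporting post-`main` completions.
  Future<T> canBeIgnored() {
    _ignorable[this] = true;
    return this;
  }
}

/// What the (hypothetical) checker would consult before reporting.
bool isIgnorable(Future<Object?> f) => _ignorable[f] ?? false;

void main() {
  // The returned future is discarded on purpose; marking it keeps the
  // (hypothetical) dynamic check from flagging it later.
  Future<void>.delayed(const Duration(milliseconds: 1)).canBeIgnored();
}
```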