Test bindings using both Dart-only and Dart+C modes #685
We mainly have two types of end-to-end tests that run on bindings.
The "golden file" tests generate bindings for a given config and compare them against committed reference bindings. It's so common that doing it is a one-line call to a helper function. You can probably commit both types of bindings as reference bindings, and then modify this function to compare both reference bindings.

For "runtime" bindings, however, this probably leads to some amount of code duplication, because the imports will be different. There's also the issue that it will lead to an ever-growing number of committed bindings, which is not great for the PR review workflow.

I believe there's another viable option: generating the majority of the test suite at runtime.
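The golden-file comparison described above boils down to a diff between the committed reference copy and freshly generated output. Here is a minimal sketch of that flow with the generator stubbed out by `printf` so it runs anywhere; in jnigen the generation step would be an invocation of the tool instead, and the directory names and `Example.dart` are illustrative assumptions, not the repository's actual layout.

```shell
set -e
mkdir -p reference generated

# Committed reference bindings (the "golden" copy).
printf 'class Example {}\n' > reference/Example.dart

# Freshly generated bindings (stubbed here; normally produced by the tool).
printf 'class Example {}\n' > generated/Example.dart

# The test passes iff the generated output matches the committed copy.
if diff -u reference/Example.dart generated/Example.dart; then
  echo 'golden test passed'
fi
```

Committing both Dart-only and Dart+C reference copies would mean running this diff once per mode.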
Besides running tests, the normal lints and format checks can also be run. There will always be a minimal set of golden-file bindings committed in the repository.
When implementing #669, it's desirable to test both Dart+C and Dart-only bindings. So what implementation strategy should I go with?
Let's focus on "runtime" tests. I think we don't want to duplicate the tests, so we would want to generate the bindings on the fly and test them using the same file. We might even generate the pure-Dart and C-based ones at the same path, so we don't even have to change the import. Another test could generate the bindings, analyze them, check them against the golden files (if they exist), and run the runtime tests on the generated bindings.
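The "same path" idea above can be sketched as follows: generate each mode's bindings at one canonical path, run the shared test file against it, then regenerate with the other mode and rerun. Generation and test execution are stubbed with `printf`/`echo` so the snippet runs anywhere; the paths are hypothetical, not jnigen's actual layout.

```shell
set -e
mkdir -p lib/generated

# Pass 1: pure-Dart bindings at the canonical path (stubbed).
printf '// dart-only bindings\n' > lib/generated/bindings.dart
echo 'dart test   # runtime tests import lib/generated/bindings.dart'

# Pass 2: regenerate the same path with C-based bindings (stubbed) and rerun.
printf '// dart+c bindings\n' > lib/generated/bindings.dart
echo 'dart test   # same test file, same import, different backend'
```

Because both passes write to the identical path, the test file's import never changes, and no test code is duplicated.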
I think there's still one advantage we gain by duplicating tests: consistent reporting. There are also some requirements to consider, in descending order of priority.
I think the design I proposed above meets most of these requirements. But it's a very unconventional layout, and it separates the runtime tests from the rest of the test suite. One thing I am still not sure about is IntelliSense. That might need some nested `analysis_options` trickery: we basically want analysis to happen only if the bindings were generated by the tool.
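One possible shape for that trickery (an assumption, not what jnigen actually does): exclude the generated directory from the root `analysis_options.yaml`, so the analyzer and the IDE ignore it while the bindings don't exist, and have the generation script run `dart analyze` on that directory explicitly right after generating.

```yaml
# Root analysis_options.yaml (sketch). The directory name is hypothetical.
analyzer:
  exclude:
    - test/generated_runtime_tests/**
```

The trade-off is that IDE diagnostics for the generated files only appear when they are analyzed on demand, which matches the "analyze only if generated" requirement.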
Why not? We could remove the C-based bindings committed by default, replace them with pure-Dart ones, and rerun the runtime tests. This would solve some of our IntelliSense issues. Edit: although we'd want to use other conditions.
Because the test would alter repository state, which may affect later tests or cause the altered file to be committed accidentally. It's useful to think of it like this: the generated bindings which are checked in must be a "pure function" of the config and source, which are also checked in. Edit: this may need clarification. You can revert the changes at the end, but that doesn't help if the test was interrupted, or if other golden-file tests run interleaved with the bindings tests. I would rather leave every checked-in file unmodified than create further problems down the line.
How about this:
1. `dart run jnigen:setup`
2. `generateReplicatedTests()`
3. `dart test`, streaming results to stdout
4. Delete the replicated bindings.
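A minimal sketch of those four steps as one wrapper script, with the real commands stubbed by `echo` so the control flow is runnable as-is. The trap addresses the concern raised earlier: even if `dart test` fails or the run is interrupted, the replicated bindings are deleted and cannot be committed accidentally. All names here are hypothetical.

```shell
set -e
GENERATED_DIR="generated_runtime_tests"

# Clean up the replicated bindings on any exit, including failures and
# interruptions, so no generated file lingers in the working tree.
cleanup() { rm -rf "$GENERATED_DIR"; }
trap cleanup EXIT

mkdir -p "$GENERATED_DIR"
echo 'step 1: dart run jnigen:setup'      # build the C support library
echo 'step 2: generateReplicatedTests()'  # replicate tests for both modes
echo 'step 3: dart test'                  # stream results to stdout

# Step 4: explicit cleanup on the normal path (the trap covers the rest).
cleanup
```

Wrapping the steps this way keeps the "checked-in files stay unmodified" invariant without relying on each test to revert its own changes.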
That's good. We can have some script that wraps these steps.
I'd suggest we just wrap all these into a single script as described above, and either error on directly invoking the individual steps, or let them remain invocable on their own.
Any reason to combine the two? I have a few reasons for keeping the steps separate.
I actually ended up following the approach you suggested, albeit with a few changes. This does mean the VS Code UI for running individual tests doesn't work very nicely. (I always use the command line, so I never thought about it.)
Currently, some tests run in Dart-only mode and others run in Dart+C mode. We're only running `dart analyze` on the other mode.