Add first version of pytest infrastructure #157
Conversation
@isteinbrecher this is a work in progress - I've added the first pytest fixtures and also enabled the tagging approach. This turned out to be much more complicated than anticipated: I wanted the "standard" tests to always execute, which was not doable with https://pypi.org/project/pytest_tagging/. Therefore, I simply implemented my own version, which turned out to be very easy and doable. The tests can now be executed with:
It is also possible to add multiple tags to a test and to execute multiple tags at once. I am happy with the current status, and I'll add the reference comparison function soon.
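The tagging behaviour described above (untagged "standard" tests always run, tagged tests only when their tag is requested) can be sketched roughly as follows. This is a hypothetical illustration, not the actual MeshPy implementation - the name `should_run` and the list-based tag arguments are assumptions; in a real `conftest.py` such a predicate would be evaluated during collection (e.g. from the `pytest_collection_modifyitems` hook).

```python
def should_run(test_tags, selected_tags):
    """Decide whether a single test is executed.

    Untagged ("standard") tests always run; tagged tests run only when
    at least one of their tags was selected on the command line.
    """
    if not test_tags:
        # A test without tags is a "standard" test and always executes.
        return True
    # Tagged tests execute only if they share a tag with the selection.
    return bool(set(test_tags) & set(selected_tags))
```

In a `conftest.py`, tests failing this predicate would typically get a skip marker applied during collection rather than being filtered out entirely.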
Also, pytest coverage is added and works as expected - afterwards we can simply add
(Currently I get differing results between the GitHub Actions and local executions -> locally it looks like this:)
Force-pushed from 68ef0ab to ab87cef
@davidrudlstorfer thanks a lot for the effort! This looks really great and I really like it. I left some comments. Do you envision switching only a few modules in this PR, or more? See my previous comment: we could simply merge the pytest config, keep executing the "old" tests, and then port file by file, but that is just a suggestion.
@isteinbrecher thanks for your first round of review of the current status. In this PR I only wanted to update the current pytest tests (cosserat curve), and out of curiosity I've ported one additional feature (rotations). The remaining parts should be done in another PR. The only thing missing here is the implementation of the comparison function. As we've already briefly discussed, I want to implement one function which automatically detects whether a string or a file is handed over and then compares against the file or the string accordingly. I'll add that soon.
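A minimal sketch of such a dispatching comparison function, assuming the reference is either a `pathlib.Path` (read from disk) or a literal string. The name `compare_with_reference` and the dispatch on the argument type are assumptions for illustration only; the actual MeshPy implementation may detect files differently.

```python
from pathlib import Path


def compare_with_reference(reference, result):
    """Hypothetical sketch: compare a result against either a
    reference file or a literal reference string."""
    if isinstance(reference, Path):
        # A path was handed over: compare against the file contents.
        reference = reference.read_text()
    # Otherwise treat the reference as a literal string.
    assert reference == result, "Result does not match the reference"
```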
@davidrudlstorfer we already have that, the ...
Force-pushed from 9b96b09 to 747da4b
@isteinbrecher this would be ready for a review from your side. In the last commit I've added my suggestion for the new comparison feature. In my opinion this has a few advantages over the old version:
I've tested every path of the implementation and everything works accordingly! I would be very happy if we could converge on a final pytest testing framework with the currently suggested changes.
This would now also enable porting everything to the new testing framework, as well as deleting the coverage configs and the test utils.
@davidrudlstorfer thanks a lot for the work on the modernisation of the testing framework. I left some comments; maybe it makes sense to discuss them in person at some point. I would just like to get the functionality that best fits the type of testing done in MeshPy. Please don't get discouraged by my comments - I really like this whole approach and greatly appreciate your effort!
Other question: if I run pytest, it takes a couple of seconds to gather the tests. Do you have an idea how this could be sped up? This doesn't matter for CI, only for local testing.
@isteinbrecher big thanks for your second round of review! Let's discuss the few open points in person to converge on one solution faster :)
I think this is a combination of many things here. First of all, it takes so long because https://stackoverflow.com/questions/16417546/how-to-speed-up-pytest
The total test time should therefore stay the same in comparison to the current testing approach. The current testing suite takes around 3 s on my machine, and the current pytests take around 1 s for collecting and running. I think once everything is ported we can look at the total execution time again. Also, after restructuring the repo and improving the imports, this should be even faster.
Force-pushed from 9845eab to 09bcbae
@davidrudlstorfer thanks for the changes. I am not sure if the PR was already ready for review; I still left some comments 😅.
@isteinbrecher sorry if it wasn't clear from my side: I am still working on the changes we discussed yesterday. I hope to get them ready this evening or tomorrow morning. They will address most of the points raised above, i.e., a very fancy way to vastly simplify the result comparison with reference files. Currently it's a real WIP - once the major things are resolved I'll push it 🚀 EDIT: I also brought back the functionality regarding the diffs in VSCode, which now works in a more general way.
I am sorry, I didn't have a close enough look before reviewing 😅
Force-pushed from 8be1559 to 905a94f
Of course, it took much longer than expected - it felt like 50 pipeline runs 🥲 Now everything works, and the artifact upload works for all containers (macOS, Windows, Ubuntu, 4C container). I've addressed all of your comments, and this would be ready for your next review @isteinbrecher
This is a failed pipeline run where you can download the artifact from each job: https://github.com/imcs-compsim/meshpy/actions/runs/12692452173
@davidrudlstorfer thanks for the work, I think we are very close to merging this. I took the liberty of adding two commits to the branch as a suggestion:
This commit allows for comparison of all possible combinations of
While adding the changes from the previous commit, I realised that I missed the changes in the ... While looking at this, I saw that we had some (at least to me) not so intuitive behaviours in the code: we had functions that return True or raise an error, but never return False. At least to me that seemed weird. I tried to unify this, so that the compare functions we actually call from within the testing framework now either run without complaining or raise an assertion. This also allows us to simply call the renamed compare function in the tests with assert_results_equal(... without the leading ... I am curious about your opinion on these changes.
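The described unification - compare functions that either pass silently or raise, instead of returning True - could look roughly like this. This is a simplified sketch, not the real MeshPy function; only the name `assert_results_equal` is taken from the comment above, and the simple `!=` comparison stands in for the actual comparison logic.

```python
def assert_results_equal(reference, result):
    """Either pass silently or raise an AssertionError.

    Because the function raises on mismatch (instead of returning a
    boolean), it can be called directly inside a test without a
    leading ``assert``.
    """
    if reference != result:
        raise AssertionError(
            f"Results differ:\n  reference: {reference!r}\n  result:    {result!r}"
        )
```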
Force-pushed from 9766fde to b60728d
@isteinbrecher thanks a lot for extending the functions!
Unfortunately, I do not fully agree with you on all changes. In many cases a lot of complexity and nested functions were added where they are not necessary at all. Also, functions should always return one kind of thing, and not just empty strings depending on the outcome.
We could probably increase the simplicity of the code by a lot. That said, I am also fully fine with it if you decide to merge the current changes.
If you like, I am more than happy to also provide a suggestion based on my comments from above - I think in the end it is worth it if we provide a really nice testing framework 🙏🏻
Thanks for your comments @davidrudlstorfer. I left some explanations. I will add another commit trying to revert my changes to address most of your major points. We can use that as a starting point for further discussion or simply revert it again.
@isteinbrecher thanks for adding the changes and explanations! If I understand you correctly, your major concern is that you want to use the function externally as well. What do you think if we just provide a wrapper for the external use? I added this in the last commit - this function now does exactly the same thing you implemented, but is just a wrapper around the standard ... This way the function signature is identical within meshpy for ... I am open to a better name for the function which also returns stuff 😅 Curious about your opinion
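The wrapper idea can be sketched as follows. The function names here are hypothetical; the point is just that the external variant shares the signature of the assert-style function but returns a boolean instead of raising.

```python
def assert_results_equal(reference, result):
    """Assert-style comparison used inside the test suite."""
    if reference != result:
        raise AssertionError("Results differ")


def results_equal(reference, result):
    """Boolean wrapper for external use.

    Same signature as the assert-style function, but catches the
    AssertionError and returns True/False instead of raising.
    """
    try:
        assert_results_equal(reference, result)
    except AssertionError:
        return False
    return True
```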
Looks good to me! I left two minor comments, other than that I think we can proceed and merge this. The main functionality should work as expected and we can fine tune the implementation later on if needed.
Thanks again for the effort to modernise the testing framework @davidrudlstorfer .
I will open an issue to track the test modules that still need to be ported.
PS: I suggest combining everything into a single, or some very few, commits.
Perfect 🤩 I've addressed the two minor remarks. I'll squash everything into a few manageable commits and then force-push.
- Provide pytest infrastructure, including
  - Tags/markers for different test scenarios
  - Fixtures to simplify testing
  - Comparison utils for all sorts of types
- Port rotation tests to pytest
- Port cosserat curve to pytest
- Update GitHub actions
- Simplify testing procedure
- Update artifact upload
- Remove files from old testing framework
Force-pushed from 6aa52a7 to 85deef8
Thanks again for the effort, let's get this merged!
Includes first fixtures and tagging approach