test runner for basilisp.test #980 #1044
base: main
Conversation
(def ^:dynamic *test-assertions* nil)

(def ^{:deprecated true
       :dynamic true}
  *test-failures*
  "Deprecated. Use :lpy:var:`*test-assertions*` instead."
  nil)
I changed the name of `*test-failures*` to `*test-assertions*` since we need to track more than just failures. Existing `gen-assert` methods will continue to work as expected because `*test-failures*` refers to the same value as `*test-assertions*`.
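For context, a minimal sketch of how both vars could be kept in sync during a run, assuming the runner binds them to a shared atom (the `collect-assertions` helper is hypothetical, not the verified implementation):

```clojure
;; Hypothetical helper: binding both dynamic vars to one atom means
;; legacy code that updates *test-failures* and new code that reads
;; *test-assertions* see the same collection.
(defn collect-assertions
  [test-fn]
  (let [results (atom [])]
    (binding [*test-assertions* results
              *test-failures*   results]
      (test-fn)
      @results)))
```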
    (reduce #(compose-fixtures %1 %2) fixtures)
    (constantly nil)))

(defn assert!
`assert!`, `pass!`, `fail!`, and `error!` are convenience functions for building assertions; they reduce some duplicated code.
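As a rough illustration of the shape these helpers might take (the signatures and result-map keys are assumptions inferred from the diff, where `:line` and `:type` are visible below):

```clojure
;; Hypothetical sketches; the real implementations may differ.
(defn assert!
  "Record one assertion result on *test-assertions*."
  [result]
  (when *test-assertions*
    (swap! *test-assertions* conj result)))

(defn pass!
  [expr msg line-num]
  (assert! {:expr expr :message msg :line line-num :type :pass}))

(defn fail!
  [expr msg line-num expected actual]
  (assert! {:expr     expr
            :message  msg
            :line     line-num
            :type     :failure
            :expected expected
            :actual   actual}))
```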
     :line line-num
     :type type)))

(defn pass!
`gen-assert` methods are now expected to call `pass!` to indicate a success, but it's not required. This change should be backwards compatible with existing `gen-assert` methods.
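To illustrate the compatibility claim, a hedged sketch of a legacy-style `gen-assert` method that records only failures and never calls `pass!` (the dispatch symbol and argument shape are assumptions):

```clojure
;; Hypothetical legacy method: no pass! call. Under the new runner
;; this should still behave correctly, since pass! is optional.
(defmethod gen-assert 'legacy-truthy?
  [[_ form :as expr] msg line-num]
  `(let [actual# ~form]
     (when-not actual#
       (fail! (quote ~expr) ~msg ~line-num true actual#))))
```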
      (pass! (quote ~expr) ~msg ~line-num)
      (fail! (quote ~expr) ~msg ~line-num expected# actual#))))

(defmethod gen-assert 'instance?
I just added this for nicer reporting of type assertions. I can split it out if that's preferred.
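For reference, the expansion might look something like this sketch, matching the `pass!`/`fail!` pattern from the diff above (the argument destructuring is an assumption):

```clojure
;; Sketch: assertion for (instance? SomeType obj) that reports the
;; expected type against the actual runtime type on failure.
(defmethod gen-assert 'instance?
  [[_ tp obj :as expr] msg line-num]
  `(let [expected# ~tp
         obj#      ~obj]
     (if (instance? expected# obj#)
       (pass! (quote ~expr) ~msg ~line-num)
       (fail! (quote ~expr) ~msg ~line-num expected# (type obj#)))))
```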
      has-completion? (set completions)]
  (is (= id @id*))
  (is (= status ["done"]))
  (are [expected] (has-completion? expected)
I changed these tests to check a random subset of the completions in `basilisp.test` so the namespaces can expand without breaking these tests.
tests/basilisp/source_test.py
" 3 > | \n", | ||
" 4 > | (\x1b[38;5;34ma\x1b[39m)\n", | ||
" 5 | (\x1b[38;5;129;01mlet \x1b[39;00m[\x1b[38;5;136ma\x1b[39m\x1b[38;5;250m \x1b[39m\x1b[38;5;241m5\x1b[39m]\n", | ||
" 6 | \x1b[38;5;250m \x1b[39m(\x1b[38;5;34mb\x1b[39m))\n", |
I'm not sure why this test started failing for me. If it's a `TERM` issue, then I'll revert this and set the variable explicitly for the test.
Done
In the future, I'd really prefer if you could reach out before submitting a PR with such significant changes. I do appreciate that people are eager to contribute to the project, but as the primary maintainer it's important to me that I both understand and agree with all of the changes being merged into the project.
There are some simple and uncontroversial changes in this changeset that I'd be glad to review and merge separately, namely:
- Expanded fixture support
- The `instance?` assertion
- The general cleanup of the testing logic using the new `fail!`, etc. macros
- The various small test fixes
However, there are some other things about which I have some reservations. In particular, you're proposing creating separate reporting and test collection mechanisms alongside the existing logic without any real justification provided here or in the linked issue. Is there a reason we didn't use PyTest for this?
Sorry, I had created issue #980 to announce my intentions for this work. This PR is large and can be split into multiple PRs, but it represents the implementation of the whole issue. I'll try to add more detailed explanations in the future.
I will separate these out.

1. I didn't use PyTest for this because its only entry point seems to be
2. I wanted to avoid modifying PyTest.
It was unclear to me whether or how you intended to work on it, so in the future it would be helpful to let me know, and for something larger, to briefly describe how you plan to handle it.
It does represent the implementation of the entire issue, but the issue itself names 2-3 independent changes. My preference is to keep smaller issues and PRs whenever possible. I feel ok with linking multiple PRs to the same issue for now, but in general I prefer to just create several smaller issues. To my mind it makes it easier to review, easier to reason about, and easier to test.
Thank you for the explanation.

As to (1), PyTest provides many hooks that may be able to provide us with the information. A bit of searching and I found that we could easily create a plugin that gets passed to

As to point (2), is this true? Can we substantiate that it is actually slower or more expensive? When I just run

I guess philosophically my other concern is that if we have two test runners and they set up the test environment differently, and tests fail in one but not the other, then that's confusing for users and ultimately quite frustrating for me, since it will result in bug reports and chasing down incompatibilities between these two test runners.
Ok, I'll try to be more active and vocal.
That's true, but we still need to define some kind of interface for the nREPL function to hook into unless we wanted to create two plugins. Would it make more sense to create a flexible test runner in
I can do more research; a namespace with one trivial test should be able to run in the UI instantaneously. The difference in REPL experience is worth considering as well. For example, if I write and evaluate a test in a comment block, I should be able to execute that test from the REPL. If the test has to be found by reloading that module, then the test wouldn't be loaded and that REPL experience is broken. I'll check for this as well.
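For example, the workflow being described looks roughly like this (the namespace is hypothetical; `deftest` and `is` come from `basilisp.test`):

```clojure
(ns my.app-test
  (:require [basilisp.test :refer [deftest is]]))

(comment
  ;; Defined and run ad hoc from the REPL. A runner that only
  ;; discovers tests by reloading the file from disk would never
  ;; see this test, breaking the interactive workflow.
  (deftest scratch-test
    (is (= 4 (+ 2 2)))))
```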
I understand the desire for one system to maintain.

Edit: I was thinking more about this overnight. One big advantage of keeping
Implements #980

`basilisp.test`:
- `run-tests`, `run-all-tests`
- `with-fixtures`, `compose-fixtures`, and `join-fixtures`
- `instance?` assertion

`basilisp.contrib.nrepl-server`:
- `test`, `test-all`, and `retest` ops. It's working with emacs-cider.
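A hedged usage sketch of the fixture helpers and runner listed above, assuming they follow `clojure.test`-style semantics (the namespace and fixture body are placeholders):

```clojure
(ns my.app-test
  (:require [basilisp.test :as t :refer [deftest is use-fixtures]]))

(defn with-conn
  "A fixture: set up, run the test, tear down."
  [run]
  (println "connect")
  (try
    (run)
    (finally
      (println "disconnect"))))

(use-fixtures :each with-conn)

(deftest conn-test
  (is true))

(comment
  ;; Run this namespace's tests from the REPL.
  (t/run-tests 'my.app-test))
```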