
test runner for basilisp.test #980 #1044


Open · wants to merge 6 commits into base: main

Conversation

@mitch-kyle (Contributor) commented Sep 10, 2024

Implements #980

  • basilisp.test
    • Add test report protocol
    • Add run-tests, run-all-tests
    • Add with-fixtures, compose-fixtures, and join-fixtures
    • Add instance? assertion
  • basilisp.contrib.nrepl-server.test
    • Add test, test-all, and retest ops; verified working with emacs-cider
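The fixture helpers listed above mirror clojure.test semantics: a fixture is a function that receives the test thunk and decides how to wrap it. A minimal Python sketch of how `compose-fixtures` and `join-fixtures` could behave (names and structure are illustrative, not the actual Basilisp implementation):

```python
from functools import reduce

def compose_fixtures(f1, f2):
    """Return a fixture that runs f1 around f2 around the test."""
    return lambda test: f1(lambda: f2(test))

def join_fixtures(fixtures):
    """Compose a collection of fixtures into one, outermost first."""
    return reduce(compose_fixtures, fixtures, lambda test: test())

# Demonstration: record the nesting order.
log = []

def outer(test):
    log.append("outer-before")
    test()
    log.append("outer-after")

def inner(test):
    log.append("inner-before")
    test()
    log.append("inner-after")

combined = join_fixtures([outer, inner])
combined(lambda: log.append("test-body"))
print(log)
```

The identity fixture `lambda test: test()` as the reduce seed means joining an empty collection still yields a usable fixture, matching the `(constantly nil)` fallback visible later in the diff.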

Comment on lines +31 to +37
(def ^:dynamic *test-assertions* nil)

(def ^{:deprecated true
       :dynamic true}
  *test-failures*
  "Deprecated. Use :lpy:var:`*test-assertions*` instead."
  nil)

I changed the name of *test-failures* to *test-assertions* since we need to track more than just failures.

Existing gen-assert methods will continue to work as expected, as *test-failures* refers to the same value as *test-assertions*.
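The backward-compatibility claim rests on both names resolving to the same underlying value. One way such aliasing can work, sketched in Python purely for illustration (this is not the Basilisp binding mechanism):

```python
# Two names bound to one mutable container: mutations made through the
# deprecated alias remain visible through the new name, and vice versa.
test_assertions = []               # stands in for *test-assertions*
test_failures = test_assertions    # deprecated alias, same object

test_failures.append({"type": "fail", "message": "example"})
print(test_assertions[0]["type"])
```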

(reduce #(compose-fixtures %1 %2) fixtures)
(constantly nil)))

(defn assert!

assert!, pass!, fail!, and error! are convenience functions for building assertions; they reduce some duplicated code.

:line line-num
:type type)))

(defn pass!

gen-assert methods are now expected to call pass! to indicate success, but doing so is not required. This change should be backwards compatible with existing gen-assert methods.

(pass! (quote ~expr) ~msg ~line-num)
(fail! (quote ~expr) ~msg ~line-num expected# actual#))))

(defmethod gen-assert 'instance?
@mitch-kyle, Sep 10, 2024

I just added this for a nicer reporting of type assertions. I can split it out if that's preferred.

has-completion? (set completions)]
(is (= id @id*))
(is (= status ["done"]))
(are [expected] (has-completion? expected)

I changed these tests to check a random subset of the completions in basilisp.test so the namespaces can expand without breaking these tests.

" 3 > | \n",
" 4 > | (\x1b[38;5;34ma\x1b[39m)\n",
" 5 | (\x1b[38;5;129;01mlet \x1b[39;00m[\x1b[38;5;136ma\x1b[39m\x1b[38;5;250m \x1b[39m\x1b[38;5;241m5\x1b[39m]\n",
" 6 | \x1b[38;5;250m \x1b[39m(\x1b[38;5;34mb\x1b[39m))\n",
@mitch-kyle, Sep 10, 2024

I'm not sure why this test started failing for me. If it's a TERM issue then I'll revert this and set the variable explicitly for the test.


Done

@chrisrink10 (Member) left a comment

In the future, I'd really prefer if you could reach out before submitting a PR with such significant changes. I do appreciate that people are eager to contribute to the project, but as the primary maintainer it's important to me that I both understand and agree with all of the changes being merged into the project.

There are some simple and uncontroversial changes in this changeset that I'd be glad to review and merge separately, namely:

  • Expanded fixture support
  • The instance? assertion
  • The general cleanup of the testing logic using the new fail!, etc. macros
  • The various small test fixes

However, there are some other things about which I have reservations. In particular, you're proposing separate reporting and test-collection mechanisms alongside the existing logic without any real justification provided here or in the linked issue. Is there a reason we didn't use PyTest for this?

@mitch-kyle

> In the future, I'd really prefer if you could reach out before submitting a PR with such significant changes. I do appreciate that people are eager to contribute to the project, but as the primary maintainer it's important to me that I both understand and agree with all of the changes being merged into the project.

Sorry, I had created issue #980 to announce my intentions for this work. This PR is large and can be split into multiple PRs, but it represents the implementation of the whole issue. I'll try to add more detailed explanations in the future.

> There are some simple and uncontroversial changes in this changeset that I'd be glad to review and merge separately, namely: ...

I will separate these out.

> However, there are some other things about which I have reservations. In particular, you're proposing separate reporting and test-collection mechanisms alongside the existing logic without any real justification provided here or in the linked issue. Is there a reason we didn't use PyTest for this?

I didn't use PyTest for this because its only entry point seems to be pytest.main, which is limiting for two reasons:

  1. It only returns an exit code. This is the main problem: we need a detailed breakdown of the assertions in order to build tooling around them. The new reporting system has the flexibility to meet this need.
  2. It does expensive test collection: it searches for and loads files. Ideally, running tests in the REPL should be about as expensive as calling a function. We don't need to search for and then reload the file (which might fail if the file is in an unreadable state, as can happen during REPL development).
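The "as cheap as calling a function" goal in point 2 can be sketched with a collection-free runner: tests register themselves at definition time, so re-running one never touches the filesystem. The names below are hypothetical Python stand-ins, not the actual basilisp.test implementation.

```python
# Hypothetical sketch: tests self-register when defined, so running one
# is a dict lookup plus a function call -- no file search, no reload.
_registry = {}

def deftest(fn):
    """Register a test function under its name (stand-in for deftest)."""
    _registry[fn.__name__] = fn
    return fn

def run_test(name):
    """Run one registered test and return a structured result."""
    try:
        _registry[name]()
        return {"name": name, "outcome": "pass", "message": None}
    except AssertionError as exc:
        return {"name": name, "outcome": "fail", "message": str(exc)}
    except Exception as exc:  # report errors rather than crash the REPL
        return {"name": name, "outcome": "error", "message": str(exc)}

@deftest
def addition_works():
    assert 1 + 1 == 2

print(run_test("addition_works"))
```

Because registration happens at definition time, a test evaluated from a comment block in the REPL is immediately runnable, with no module reload required.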

I wanted to avoid modifying the PyTest testrunner.py as much as possible, but I realize this means there are now two test-running systems in Basilisp. Would you be open to changing the existing test runner to use the new mechanisms as much as possible?

@chrisrink10

> Sorry, I had created issue #980 to announce my intentions for this work.

It was unclear to me whether or how you intended to work on it, so in the future it would be helpful to let me know and, for something larger, describe how you plan to handle it.

> This PR is large and can be split into multiple PRs, but it represents the implementation of the whole issue.

It does represent the implementation of the entire issue, but the issue itself names 2-3 independent changes. My preference is to keep smaller issues and PRs whenever possible. I feel ok with linking multiple PRs to the same issue for now, but in general I prefer to just create several smaller issues. To my mind it makes it easier to review, easier to reason about, and easier to test.

> I didn't use PyTest for this because its only entry point seems to be pytest.main, which is limiting for two reasons:
>
> 1. It only returns an exit code. This is the main problem: we need a detailed breakdown of the assertions in order to build tooling around them. The new reporting system has the flexibility to meet this need.
> 2. It does expensive test collection: it searches for and loads files. Ideally, running tests in the REPL should be about as expensive as calling a function. We don't need to search for and then reload the file (which might fail if the file is in an unreadable state, as can happen during REPL development).

Thank you for the explanation.

As to (1), PyTest provides many hooks that may be able to provide us with information. After a bit of searching, I found that we could easily create a plugin, passed to pytest.main, that collects the information in question.

As to point (2), is this true? Can we substantiate that it is actually slower or more expensive? When I just run pytest directly (not involving Tox and not disabling the namespace cache), it seems to collect files and execute tests quickly.

I guess philosophically my other concern is that if we have two test runners and they set up the test environment differently and tests fail in one but not the other, then that's confusing for users and ultimately quite frustrating for me, since it will result in bug reports and chasing down incompatibilities between these two test runners.

@mitch-kyle commented Sep 19, 2024

> It was unclear to me whether or how you intended to work on it, so in the future it would be helpful to let me know and, for something larger, describe how you plan to handle it.

Ok, I'll try to be more active and vocal.

> As to (1), PyTest provides many hooks that may be able to provide us with information. After a bit of searching, I found that we could easily create a plugin, passed to pytest.main, that collects the information in question.

That's true, but we still need to define some kind of interface for the nREPL function to hook into, unless we want to create two plugins. Would it make more sense to create a flexible test runner in basilisp.test and re-implement the PyTest test runner using it? clojure.test doesn't depend on JUnit; maybe it makes sense to keep basilisp.test decoupled from PyTest.

> As to point (2), is this true? Can we substantiate that it is actually slower or more expensive? When I just run pytest directly (not involving Tox and not disabling the namespace cache), it seems to collect files and execute tests quickly.

I can do more research, but a namespace with one trivial test should run nearly instantaneously. The difference in REPL experience is worth considering as well. For example, if I write and evaluate a test in a comment block, I should be able to execute that test from the REPL. If the test has to be found by reloading the module, then the test wouldn't be loaded and that REPL workflow is broken. I'll check for this as well.

> I guess philosophically my other concern is that if we have two test runners and they set up the test environment differently and tests fail in one but not the other, then that's confusing for users and ultimately quite frustrating for me, since it will result in bug reports and chasing down incompatibilities between these two test runners.

I understand the desire for one system to maintain.

Edit: I was thinking more about this overnight. One big advantage of keeping basilisp.test decoupled from PyTest is that, in the future, Basilisp may not need PyTest as a dependency at all. At some point the PyTest testrunner.py could be packaged separately, removing the need for users to include the PyTest dependency in their packages.

chrisrink10 pushed a commit that referenced this pull request Sep 22, 2024
I've separated out the test fixture functions from #1044 for #980