Add test run duration #1038
Conversation
- Added test run start to cli
- Added test run elapsed calculation to cli and watcher
Restarting tests.

Test failures are due to a change in Node v6.5 as outlined in #1028.

That PR has been merged! PS: can I have some screenshots of this in action?
Picture and text format:

```
➜ ava git:(add-test-run-duration) node cli.js --watch test/fixture/watcher/tap-in-conf/test.js test/fixture/watcher/tap-in-conf

1 passed [16:21:22] (598.783ms)
1 failed

test › fails
Test failed via t.fail()
Test.fn (test.js:8:4)
_combinedTickCallback (internal/process/next_tick.js:67:7)
process._tickCallback (internal/process/next_tick.js:98:9)

────────────────────────────────────────────────────────────

1 passed [16:21:22] (235.709ms)
1 failed

test › fails
Test failed via t.fail()
Test.fn (test.js:8:4)
_combinedTickCallback (internal/process/next_tick.js:67:7)
process._tickCallback (internal/process/next_tick.js:98:9)

────────────────────────────────────────────────────────────

1 passed [16:21:32] (209.869ms)
^C%
➜ ava git:(add-test-run-duration) node cli.js test/fixture/pkg-conf/defaults

1 passed (199.979ms)
➜ ava git:(add-test-run-duration) node cli.js test/fixture/long-running.js

2 passed (5.202s)
```
@fearphage Why do you need this? What problem does it solve? Why not just run […]?
It shows you the total duration of a test run.
Another problem with […]
That's the solution, not the problem. What will you do with the knowledge of the time it takes to run the test suite? Why do you need it on every run? I'm just trying to figure out why this is essential.
The problem, then, is that prior to this PR I had no idea how long all the tests took to run together. There was information that could easily be captured that was not being presented to me.

I don't fully understand what you're asking. I will do the same thing I do with the knowledge of the time it takes to run individual tests, which is already a feature in AVA today. Do you have a problem with that functionality as well? What's the difference between individual test times and suite run times?

It's beneficial to be able to see at a glance how long a test suite takes to run. I have a few questions for you. Why would you not want to see how long your tests take to run? What is it detracting from your code, tests, and/or life?
It's been a summer without much AVA work for me, so I might have forgotten, but IIRC we don't show individual test duration, do we? I would be interested in surfacing which tests are exceedingly slow, in order to help users optimize their test suites. I'm not convinced outputting raw numbers is a sufficient solution, especially given the variability in host machine activity, process scheduling, and test concurrency.
We show individual test time and the actual time all tests finished in verbose mode.
I don't see individual test time in my verbose output. |
That feature is there to expose slow tests. We only show it when a test crosses a set threshold, which is currently 100ms. That's different from showing the total time. You can't do much about the total time unless you know the time of the individual tests.
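(To make the distinction concrete, the per-test marker works roughly like the sketch below; `SLOW_THRESHOLD_MS` and the title handling are illustrative, not AVA's actual code:)

```js
// Sketch: append a duration to a test title only when it crosses
// the slow threshold, so fast tests stay out of the output.
var SLOW_THRESHOLD_MS = 100;

function decorateTitle(title, durationMs) {
	if (durationMs > SLOW_THRESHOLD_MS) {
		return title + ' (' + Math.round(durationMs) + 'ms)';
	}
	return title;
}

decorateTitle('fast test', 12);  // => 'fast test'
decorateTitle('slow test', 231); // => 'slow test (231ms)'
```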
Because everything we add to the output creates noise and distracts from the other information. We are very conservative about adding things. The default AVA output is intentionally minimal. But we're happy to consider adding things if they have practical usefulness. Convince us ;)

Only for slow tests, >100ms.
```diff
@@ -61,6 +61,8 @@ function Watcher(logger, api, files, sources) {
 	this.busy = api.run(specificFiles || files, {
 		runOnlyExclusive: runOnlyExclusive
 	}).then(function (runStatus) {
+		runStatus.testRunElapsed = runStatus.testRunStart && process.hrtime(runStatus.testRunStart);
```
No point in using `process.hrtime()`. `Date.now()` is more than accurate enough for this purpose.
Is this suggestion due to `process` not being supported in the browser, or is there another reason? I'm just asking because I'm curious.

My rationale for choosing `hrtime` over `Date` was providing the most information to the reporters. One can always get milliseconds from nanoseconds, but the reverse is not true.
Ping?
I think @sindresorhus is saying that `Date.now()` is accurate enough. I think that's true for this use case; there's no point optimizing test runs down to the nanosecond, and there is more cognitive overhead with `hrtime`.
That said I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.
Updated.
> That said I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.
In that case, it's not very useful. When you're doing TDD or just updating a single test in general, it makes a lot more sense.
Fair enough. I feel like I can do something with the deltas though. If tests ran a minute faster before PR X, that's a big deal. Also, do you want me to add duration to the other reporters?
> I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.

That's true. @avajs/core?
Replaced `process.hrtime()` with `Date.now()`.
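(The `Date.now()` version looks roughly like this; the variable names are illustrative, not the actual diff:)

```js
// Record wall-clock time when the run starts...
var testRunStart = Date.now();
// ... run the tests ...
// ...then the elapsed time is a plain millisecond delta.
var testRunElapsed = Date.now() - testRunStart;
```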
```js
	return round(ms) + 'ms';
}

module.exports = duration;
```
This could just be `pretty-ms` instead.
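(For reference, `pretty-ms` usage is roughly the following, assuming the package is installed; the example values are from its README:)

```js
var prettyMs = require('pretty-ms');

prettyMs(1337);       // => '1.3s'
prettyMs(1335669000); // => '15d 11h 23m 15s'
```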
I'm open to adding the test time to the verbose and TAP reporters, but I'm hesitant to add it to the mini-reporter. The mini-reporter is meant to be as simple as possible, and I personally don't see much value in the test time. And I agree with @novemberborn, no point in having it in watch mode.
What about adding an option for it? It's inconvenient to have to leave watch mode just to see how long the tests are taking. What's the point of printing the timestamp in watch mode? Is it so you can see that things are changing at a glance? If that is the sole purpose, could this replace the timestamp? It's more useful in terms of testing than showing the time.
@novemberborn already commented why showing total test time in watch mode is not useful.
Yes, it's to see at a glance that something has changed, and no, total test time could not replace it, for the same reason as above, and because the test time could be the same as the previous run; you would have to remember the previous time to be sure it's different.
Unless I'm missing it, @novemberborn only said s/he didn't find it useful. S/he did not elaborate on the why. Much like @novemberborn, I have an opinion too. I believe it is useful. That's the reason I added it, and the reason I expressed how inconvenient it would be to have to leave watch mode to figure out how long the tests were taking to run. I'm trying to see if there's a compromise that would allow it to remain in some form or another.
This is a possibility. The lack of precision in the timing adds to that factor. Hitting the same millisecond is definitely possible. The same nanosecond is less probable.
How does this differ from printing the timestamp? With the timestamp, you have to remember the time it displayed last time you saw it, or know what time it is now, including seconds. Neither one of them lets you know the tests just ran without relying on your memory or external factors.
No, he said why it's objectively useless:

> I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.

This has nothing to do with precision. Even if we used nanosecond precision we would only show a prettified output to the user. Let's say a test takes 1.4243242 seconds to run the first time and 1.45435353 the second time; we would only show 1.4 seconds for each.
I have a use for this "objectively useless" thing.

I had it set up for 3 decimal places, so the output in your examples would be `1.424s` and `1.454s` respectively.
Either you're not understanding how it's useless or you have a use-case you haven't been able to explain, but I feel this discussion is going nowhere. I would recommend focusing on what we're willing to do and not pushing this anymore unless you can come up with some convincing arguments not already exhausted ;)
Really?! Since I already explained the use case a few weeks ago, I guess it's the former in this false dichotomy. Are you going to elaborate on the uselessness of my use case? Help me understand how useless it really is.
Already said this, but I guess I can repeat myself again: we want it to be […]
You want the total test run time so you can use the deltas to see if it became faster or slower, but watch mode only runs tests that changed or had their source change. The delta is not constant, making it useless for this use-case, as the time will vary greatly depending on what tests/sources changed.
So it seems you do see what precision has to do with this. You prefer the opportunity for repetition more than you want precision. I understand. It seems unfair to complain about a situation that you requested.
That's precisely what I want. Imagine I'm writing […]
@fearphage given how watch mode functions I'm sceptical whether it'll actually deliver the information as reliably as you're expecting it to. I'm also wary it encourages micro-optimization of test runs. Again, we're happy to have this in regular output since large performance regressions will surface that way, but at this point we don't want this for watch mode.
Understandably so. I've been using it and it's delivering the expected results thus far. Does ava support external reporters currently?
Changes

This does 2 new things:

I added 2 properties to the `runStatus`:

- `testRunStart`
- `testRunElapsed`

Duration Utility

- `hrtimeToDuration` - converts `process.hrtime()` output to a duration
- `hrtimeToDuration.duration` - converts milliseconds to a duration
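For illustration, a minimal sketch of what such a utility might look like; `hrtimeToDuration` and `hrtimeToDuration.duration` are the PR's names, but the bodies below are illustrative, not the PR's actual code:

```js
// Sketch only — converts timing values to human-readable durations.
function hrtimeToDuration(hrtime) {
	// hrtime is the [seconds, nanoseconds] tuple from process.hrtime()
	var ms = (hrtime[0] * 1e9 + hrtime[1]) / 1e6;
	return hrtimeToDuration.duration(ms);
}

// Format milliseconds with 3 decimal places, matching the output shown
// above, e.g. '598.783ms' or '5.202s'.
hrtimeToDuration.duration = function (ms) {
	return ms < 1000 ?
		ms.toFixed(3) + 'ms' :
		(ms / 1000).toFixed(3) + 's';
};

module.exports = hrtimeToDuration;
```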
Mini Reporter

In the mini reporter, I convert the `hrtime` back to a human-readable duration to show the end user. This could be added to any reporter, but I started with just this one.

Notes
Currently I'm calculating the elapsed time in two places (`cli` and `watcher`). The best place to calculate this would have been the logger (`Logger.prototype.finish`). However, the logger seems to be logic-less, a complete pass-through, so I didn't want to add functionality there.