Add test run duration #1038


Closed · wants to merge 4 commits

Conversation

fearphage

Changes

This PR does two new things:

  • Sends the test run duration to reporters
  • Shows the test duration in the mini reporter (I added a utility to calculate this)

I added two properties to runStatus:

  • testRunStart
  • testRunElapsed

Duration Utility

  • hrtimeToDuration - converts process.hrtime() to duration
  • hrtimeToDuration.duration - converts milliseconds to duration
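Concretely, a minimal sketch of what such a utility might look like (the rounding rules are my assumption, and the milliseconds converter is shown here as a standalone `msToDuration` for simplicity; the actual helper in this PR may format differently):

```javascript
// Illustrative sketch only: the helper names and rounding rules in the
// actual PR may differ.

// Convert milliseconds to a short human-readable duration string.
function msToDuration(ms) {
	return ms >= 1000 ?
		(ms / 1000).toFixed(3) + 's' :
		ms.toFixed(3) + 'ms';
}

// Convert a process.hrtime() diff tuple ([seconds, nanoseconds])
// to a human-readable duration string.
function hrtimeToDuration(hrtime) {
	var ms = (hrtime[0] * 1e9 + hrtime[1]) / 1e6;
	return msToDuration(ms);
}

console.log(hrtimeToDuration([0, 598783000])); // '598.783ms'
console.log(hrtimeToDuration([5, 202000000])); // '5.202s'
```

The two sample values match the `598.783ms` and `5.202s` figures in the screenshots below.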

Mini Reporter

In the mini reporter, I convert the hrtime back to a human-readable duration to show the end user. This could be added to any reporter, but I started with just this one.

Notes

Currently I'm calculating the elapsed time in two places (cli and watcher). The best place to calculate this would have been the logger (Logger.prototype.finish). However, the logger seems to be logic-less and a complete pass-through, so I didn't want to add functionality there.

* Added test run start to cli
* Added test run elapsed calculation to cli and watcher
@fearphage
Author

Restarting tests.

@fearphage fearphage closed this Sep 17, 2016
@fearphage fearphage reopened this Sep 17, 2016
@fearphage
Author

Test failures are due to a change in Node v6.5 as outlined in #1028.

@sotojuan
Contributor

That PR has been merged! PS can I have some screenshots of this in action?

@fearphage
Author

Picture and text format:

[screenshot]

➜  ava git:(add-test-run-duration) node cli.js --watch test/fixture/watcher/tap-in-conf/test.js test/fixture/watcher/tap-in-conf

  1 passed [16:21:22] (598.783ms)
  1 failed

  test › fails
  Test failed via t.fail()
    Test.fn (test.js:8:4)
    _combinedTickCallback (internal/process/next_tick.js:67:7)
    process._tickCallback (internal/process/next_tick.js:98:9)


────────────────────────────────────────────────────────────

  1 passed [16:21:22] (235.709ms)
  1 failed

  test › fails
  Test failed via t.fail()
    Test.fn (test.js:8:4)
    _combinedTickCallback (internal/process/next_tick.js:67:7)
    process._tickCallback (internal/process/next_tick.js:98:9)


────────────────────────────────────────────────────────────

  1 passed [16:21:32] (209.869ms)


^C

➜  ava git:(add-test-run-duration) node cli.js test/fixture/pkg-conf/defaults

  1 passed (199.979ms)

➜  ava git:(add-test-run-duration) node cli.js test/fixture/long-running.js          

  2 passed (5.202s)

@sindresorhus
Member

sindresorhus commented Sep 17, 2016

@fearphage Why do you need this? What problem does it solve? Why not just run $ time ava when you want to know the total test run time?

@fearphage
Author

What problem does it solve?

It shows you the total duration of a test run.

Why not just $ time ava when you want to know the total test run time?

  • How do you use that with --watch exactly?
  • time works horribly on Windows (read: not at all).

Another problem with time is that it includes the timings of the scaffolding and setup for ava itself, as well as any teardown or waiting for file descriptors to close or whatever the case may be. I've tried to make this time measurement as close as possible to the tests themselves running and finishing.

@sindresorhus
Member

It shows you the total duration of a test run.

That's the solution, not the problem. What will you do with the knowledge of the time it takes to run the test suite? Why do you need it on every run? I'm just trying to figure out why this is essential.

@fearphage
Author

That's the solution, not the problem.

The problem, then, is that prior to this PR I had no idea how long all the tests took to run together. There was easily captured information that was not being presented to me.

What will you do with the knowledge of the time it takes to run the test suite?

I don't fully understand what you're asking. I will do the same thing I do with the knowledge of the time it takes to run individual tests. That's a feature in ava today. Do you have a problem with that functionality as well? What's the difference between individual test times and suite run times?

Why do you need it on every run?

It's beneficial to be able to see at a glance the length of time a test suite takes to run.

I have a few questions for you. Why would you not want to see how long your tests take to run? What is it detracting from your code, tests, and/or life?

@novemberborn
Member

I will do the same thing I do with the knowledge of the time it takes to run individual tests. That's a feature in ava today.

It's been a summer without much AVA work for me so I might have forgotten, but IIRC we don't show individual test duration, do we?

I would be interested in surfacing which tests are exceedingly slow, in order to help users optimize their test suites. I'm not convinced outputting raw numbers is a sufficient solution, especially given the variability in host machine activity, process scheduling, and test concurrency.

@sotojuan
Contributor

We show individual test time and the actual time all tests finished in verbose mode.

@novemberborn
Member

I don't see individual test time in my verbose output.

@sindresorhus
Member

sindresorhus commented Sep 20, 2016

I don't fully understand what you're asking. I will do the same thing I do with the knowledge of the time it takes to run individual tests. That's a feature in ava today. Do you have a problem with that functionality as well? What's the difference between individual test times and suite run times?

That feature is there to expose slow tests. We only show it when a test crosses a set threshold, which is currently 100ms. That's different from showing the total time. You can't do much about the total time unless you know the time of the individual tests.

I have a few questions for you. Why would you not want to see how long your tests take to run? What is it detracting from your code, tests, and/or life?

Because everything we add to the output creates noise and distracts from the other information. We are very conservative about adding things. The default AVA output is intentionally minimal. But we're happy to consider adding things if they have practical usefulness. Convince us ;)

@sindresorhus
Member

I don't see individual test time in my verbose output.

Only for slow tests. >100ms.

@@ -61,6 +61,8 @@ function Watcher(logger, api, files, sources) {
this.busy = api.run(specificFiles || files, {
runOnlyExclusive: runOnlyExclusive
}).then(function (runStatus) {
runStatus.testRunElapsed = runStatus.testRunStart && process.hrtime(runStatus.testRunStart);
Member

No point in using process.hrtime(). Date.now() is more than accurate enough for this purpose.

Author

Is this suggestion due to process not being supported in the browser or is there another reason? I'm just asking, because I'm curious.

My rationale for choosing hrtime over Date was providing the most amount of information to the reporters. One can always get milliseconds from nanoseconds, but the reverse is not true.
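The trade-off being debated here can be sketched as follows (illustrative only): both clocks can measure an elapsed run, but `process.hrtime()` is a monotonic nanosecond-resolution tuple, while `Date.now()` is wall-clock milliseconds.

```javascript
// Illustrative comparison of the two timing approaches discussed here.
var hrStart = process.hrtime(); // monotonic [seconds, nanoseconds] tuple
var msStart = Date.now();       // wall-clock milliseconds since the epoch

// ...a test run would happen here...

var hrDiff = process.hrtime(hrStart);                // elapsed [seconds, nanoseconds]
var elapsedMs = (hrDiff[0] * 1e9 + hrDiff[1]) / 1e6; // nanoseconds -> milliseconds
var elapsedMsFromDate = Date.now() - msStart;        // millisecond resolution only

// Milliseconds are recoverable from the hrtime tuple, but sub-millisecond
// detail is lost once you start from Date.now().
console.log(elapsedMs >= 0, Number.isInteger(elapsedMsFromDate));
```

This is the asymmetry being argued: you can always derive milliseconds from nanoseconds, never the reverse.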

Author

Ping?

Member

I think @sindresorhus is saying that Date.now() is accurate enough. I think that's true for this use case: there's no point optimizing test runs down to the nanosecond, and there is more cognitive overhead with hrtime.

That said I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.

Author

Updated.

That said I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.

In that case, it's not very useful. When you're doing TDD or just updating a single test in general, it makes a lot more sense.

@fearphage
Author

fearphage commented Sep 20, 2016

You can't do much about the total time unless you know the time of the individual tests.

Fair enough. I feel like I can do something with the deltas though. If tests ran 1 minute faster before PR X, that's a big deal.

Also do you want me to add duration to the other reporters?

@novemberborn
Member

I feel like I can do something with the deltas though. If tests ran 1 minute faster before PR X, that's a big deal.

That's true. @avajs/core ?

I don't think there's much value in tracking test duration when in watch mode, especially since the number of test files run can vary greatly.

* replaced process.hrtime() with Date.now()
return round(ms) + 'ms';
}

module.exports = duration;
Member

@sindresorhus sindresorhus Oct 3, 2016


This could just be pretty-ms instead.
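As a point of reference, a rough self-contained stand-in for the kind of one-decimal formatting pretty-ms provides (an illustrative approximation that truncates seconds to one decimal place, matching the "1.4 for each" behavior cited later in this thread; this is not pretty-ms's actual implementation):

```javascript
// Rough approximation of pretty-ms's sub-minute formatting, for illustration
// only. The real pretty-ms module also handles minutes, hours, days, etc.
function approximatePrettyMs(ms) {
	// Below one second: whole milliseconds.
	if (ms < 1000) {
		return Math.round(ms) + 'ms';
	}
	// One second and up: truncate to one decimal place.
	return (Math.floor(ms / 100) / 10) + 's';
}

console.log(approximatePrettyMs(1424.3242));  // '1.4s'
console.log(approximatePrettyMs(1454.35353)); // '1.4s'
console.log(approximatePrettyMs(235.709));    // '236ms'
```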

@sindresorhus
Member

I'm open to adding the test time to the verbose and TAP reporter, but I'm hesitant to add it to the mini-reporter. The mini-reporter is meant to be as simple as possible, and I personally don't see much value in the test time. And I agree with @novemberborn, no point in having it in watch mode.

@fearphage
Author

no point in having it in watch mode

What about adding an option for it? It's inconvenient to have to leave watch mode just to see how long the tests are taking.

What's the point of printing the timestamp in watch mode? Is it so you can see that things are changing at a glance? If that is the sole purpose, could this replace the timestamp? It's more useful in terms of testing than showing the time.

@sindresorhus
Member

What about adding an option for it? It's inconvenient to have to leave watch mode just to see how long the tests are taking.

@novemberborn already commented why showing total test time in watch mode is not useful.

What's the point of printing the timestamp in watch mode? Is it so you can see that things are changing at a glance? If that is the sole purpose, could this replace the timestamp? It's more useful in terms of testing than showing the time.

Yes, it's to see if something has changed. And no, total test time could not replace it, for the same reason as above and because the test time could be the same as previous and you would have to remember the previous test time to be sure it's different.

@fearphage
Author

@novemberborn already commented why showing total test time in watch mode is not useful.

Unless I'm missing it, @novemberborn only said s/he didn't find it useful. S/he did not elaborate on why. Much like @novemberborn, I have an opinion too: I believe it is useful. That's why I added it, and why I said it would be inconvenient to have to leave watch mode to figure out how long the tests were taking to run. I'm trying to see if there's a compromise that would allow it to remain in some form or another.

the fact that the test time could be the same as previous

This is a possibility. The lack of precision in the timing adds to that factor. Hitting the same millisecond is definitely possible. The same nanosecond is less probable.

and you would have to remember the previous test time to be sure it's different.

How does this differ from printing the timestamp? By printing the time, you have to remember the time it displayed the last time you saw it, or know what time it is now including seconds. Neither one of them lets you know the tests just ran without using your memory or external factors.

@sindresorhus
Member

Unless I'm missing it, @novemberborn only said s/he didn't find it useful.

No, he said why it's objectively useless:

especially since the number of test files run can vary greatly.

This is a possibility. The lack of precision in the timing adds to that factor. Hitting the same millisecond is definitely possible. The same nanosecond is less probable.

This has nothing to do with precision. Even if we used nanosecond precision, we would only show a prettified output to the user. Let's say a test takes 1.4243242 seconds to run the first time and 1.45435353 the second time; we would only show 1.4 seconds for each.

@fearphage
Author

objectively useless

I have a use for this "objectively useless" thing.

Even if we used nanosecond precision we would only show a prettified output to the user. Let's say a test takes 1.4243242 seconds to run the first time and 1.45435353 the second time, we would only show 1.4 second for each.

I had it set up for 3 decimal places, so the output in your examples would be 1.424s and 1.454s respectively, I believe.

@sindresorhus
Member

I have a use for this "objectively useless" thing.

Either you're not understanding how it's useless or you have a use-case you haven't been able to explain, but I feel this discussion is going nowhere. I would recommend focusing on what we're willing to do and not pushing this anymore unless you can come up with some convincing arguments not already exhausted ;)

I had it set up for 3 decimal places, so the output in your examples would be 1.424s and 1.454s respectively, I believe.

pretty-ms converts it to 1.4, which is what we want.

@fearphage
Author

pretty-ms converts it to 1.4, which is what we want.

the fact that the test time could be the same as previous

This has nothing to do with precision.

pretty-ms in its current configuration increases the likelihood of repeated output. You cited repetitious output as a problem. You also said this has nothing to do with precision. However as you increase the precision, repeated output becomes less likely. So it directly addresses one of the problems that you cited. I don't understand why you're saying it has nothing to do with this.

Either you're not understanding how it's useless or you have a use-case you haven't been able to explain

Really?! Since I already explained the use case a few weeks ago, I guess it's the former in this false dichotomy. Are you going to elaborate on the uselessness of my use case? Help me understand how useless it really is.

@sindresorhus
Member

pretty-ms in its current configuration increases the likelihood of repeated output. You cited repetitious output as a problem. You also said this has nothing to do with precision. However as you increase the precision, repeated output becomes less likely. So it directly addresses one of the problems that you cited. I don't understand why you're saying it has nothing to do with this.

Already said this, but I guess I can repeat myself: we want it to be 1.4, not 1.424s. I do realize more precise output would make one of my arguments less relevant, but that's irrelevant as it's not something we want.

Really?! Since I already explained the use case a few weeks ago, I guess it's the former in this false dichotomy. Are you going to elaborate on the uselessness of my use case? Help me understand how useless it really is.

You want the total test run time so you can use the deltas to see if it became faster or slower, but watch mode only runs tests that changed or had their source change, so the delta is not constant, making it useless for this use-case, as the time will vary greatly depending on what tests/sources changed.

@fearphage
Author

fearphage commented Oct 6, 2016

I do realize more precise output would make one of my arguments less relevant, but that's irrelevant as it's not something we want.

So it seems you do see what precision has to do with this. You prefer the opportunity for repetition more than you want precision. I understand. It seems unfair to complain about a situation that you requested.

watch mode only runs tests that changed or had their source change

That's precisely what I want. Imagine I'm writing useless-use-case.js and I'm writing a test and then writing the code to make the test pass. Then I refactor and my test time increases 30 seconds. That's a good time to revert the refactor, find the bug, or at a minimum mark the test as slow to remove it from the tdd flow. Do you understand now or is this equally useless in your objective opinion?

@novemberborn
Member

@fearphage given how watch mode functions I'm sceptical whether it'll actually deliver the information as reliably as you're expecting it to. I'm also wary it encourages micro-optimization of test runs.

Again, we're happy to have this in regular output since large performance regressions will surface that way, but at this point we don't want this for watch mode.

@fearphage
Author

I'm sceptical whether it'll actually deliver the information as reliably as you're expecting it to

Understandably so. I've been using it currently and it's delivering the expected results thus far.

Does ava support external reporters currently?

@sindresorhus
Member

Does ava support external reporters currently?

$ ava --tap and you can use any TAP reporter, but it does not apply during watch mode, nor will it ever.

4 participants