
Update visual tests for scales 1.0.0 #2842


Merged: 2 commits into tidyverse:master, Aug 21, 2018

Conversation

dpseidel
Collaborator

Closes #2824

@dpseidel dpseidel requested a review from clauswilke August 20, 2018 16:31
Member

@hadley hadley left a comment


They all look fine, although in scales we should look into why numeric-log.svg has changed for the worse (IMO).

@dpseidel
Collaborator Author

Strangely, the build is passing everywhere except R 3.2. Could that be a configuration problem?

Yeah, the additional breaks produced by the new log_breaks() are also placing more minor break lines on the y axis. I'll open an issue so we can have a better look at it.
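
A quick way to see the breaks the updated algorithm produces is to call the break function directly (a minimal sketch; the sample range is just an illustration and the exact output depends on the installed scales version):

# Inspect the breaks log_breaks() generates for a sample range.
scales::log_breaks()(c(1, 1e5))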

@clauswilke
Member

How do we proceed? I would suggest merging these changes so we can at least run tests locally again, and then look into R 3.2 as a separate issue.

@dpseidel
Collaborator Author

I agree, I think this should be merged, but Travis builds will continue to fail until we sort out 3.2. From the log, 3.2 is failing 38 visual tests, many of which are not necessarily scales-related.

@clauswilke
Member

This morning, I speculated that it may not be possible to have one single reference image for all R versions on which ggplot2 is tested. However, from a quick glance I can't tell what the cause might be. Unfortunately, I won't be able to dig further at this time.

@hadley
Member

hadley commented Aug 21, 2018

I think we should probably move to running the visual tests only with the released version of R.

@clauswilke
Member

@hadley Is there a simple way to make sure vdiffr runs on the currently released version, rather than on some hardcoded version (e.g. "3.5")? The latter would require constant manual updating.

@dpseidel Alternatively, you could add something like the following:

if (paste(R.Version()$major, R.Version()$minor, sep = ".") < "3.3.0") {
  enable_vdiffr <- FALSE
}

at this location:

In this way, vdiffr won't run on R versions that are too old, and we could slowly increase the minimum required version as problems develop with older versions.
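
For context, a guard like that might sit in a testthat helper file along the following lines. This is a minimal sketch under assumptions: the helper file name, the expect_doppelganger wrapper, and the skip message are illustrative, not ggplot2's actual code.

# Hypothetical tests/testthat/helper-vdiffr.R; names below are illustrative.
enable_vdiffr <- requireNamespace("vdiffr", quietly = TRUE)

# getRversion() returns a numeric_version, so this compares version
# components rather than raw strings.
if (getRversion() < "3.3.0") {
  enable_vdiffr <- FALSE
}

# Thin wrapper that skips visual tests when vdiffr is disabled.
expect_doppelganger <- function(title, fig, ...) {
  if (!enable_vdiffr) {
    testthat::skip("vdiffr visual tests disabled on this R version")
  }
  vdiffr::expect_doppelganger(title, fig, ...)
}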

@dpseidel
Collaborator Author

Sure, that's simple; I will add that in now. What's curious to me is that 3.1 is passing while 3.2 is failing, but looking over the Travis logs I can't tell exactly what the problem might be.

@clauswilke
Member

As far as I can tell, for 3.1 it's been skipping the visual tests already because no sufficiently recent vdiffr is available.

@clauswilke
Member

Yay, success!

@dpseidel
Collaborator Author

Ahh that makes sense! Looks like we are passing now. I'll merge this and if we eventually decide to run vdiffr on just the released version of R, we can make that change in a separate PR. Thanks @clauswilke for your help!

@dpseidel dpseidel merged commit 4a14027 into tidyverse:master Aug 21, 2018
@hadley
Member

hadley commented Aug 21, 2018

@clauswilke the easiest thing would be to set an env var just for r: release using the Travis build matrix (like the approach I use in vctrs to only build the website and run code coverage for R-release).

@clauswilke
Member

@hadley I see. Another advantage of the env var is that it won't be set on CRAN, so we won't see sudden test failures because some package changed. The disadvantage is that we'll have to set the same variable on our local machines for manual testing.

Broadly related question: Is there any way to get output printed to the terminal during the testing process? It would be really helpful if we could see in the log whether enable_vdiffr was TRUE or FALSE in a given run, so we can verify the tests are actually run in the cases where we think they are.
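
One way the env-var idea could be wired up and made visible in the log (a sketch; GGPLOT2_RUN_VDIFFR is a made-up placeholder for a variable that would be set only for r: release in the Travis build matrix):

# Enable visual tests only when the placeholder env var is set to "true".
enable_vdiffr <- identical(Sys.getenv("GGPLOT2_RUN_VDIFFR"), "true")

# message() writes to stderr, so this line should show up in the
# testthat / R CMD check log and confirms whether the tests were enabled.
message("enable_vdiffr: ", enable_vdiffr)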

@lionel-
Member

lionel- commented Aug 22, 2018

The disadvantage is going to be that we'll have to set the same on our local machines for manual testing.

I think the visual tests should be run when:

  • A specific envvar is set
  • NOT_CRAN is set

This way they are always run locally if you use devtools.

@clauswilke
Member

@lionel- Just to make sure we're on the same page: You are saying the visual tests should be run if (a specific envvar is set) OR (NOT_CRAN is set). Either of these conditions alone will trigger the tests. Correct?

@lionel-
Member

lionel- commented Aug 22, 2018

Yes, sorry for the imprecise bullet list ;)
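
Put together, the condition lionel- describes might read as follows (a minimal sketch; VISUAL_TESTS is a hypothetical variable name, and NOT_CRAN is the variable devtools sets to "true" when running tests locally):

# Run visual tests if the dedicated env var is set OR NOT_CRAN is "true".
run_visual_tests <-
  nzchar(Sys.getenv("VISUAL_TESTS")) ||
  identical(Sys.getenv("NOT_CRAN"), "true")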

@lock

lock bot commented Mar 7, 2019

This old issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex) and link to this issue. https://reprex.tidyverse.org/

@lock lock bot locked and limited conversation to collaborators Mar 7, 2019