Replace conda update / conda cache in cirrus with conda-lock #4105

Closed
jamesp opened this issue Apr 21, 2021 · 7 comments · Fixed by #4108

@jamesp
Member

jamesp commented Apr 21, 2021

📰 Custom Issue

Currently, Cirrus CI resolves a new environment on every run. Due to computational constraints this can sometimes take more than an hour, forcing the job to time out while running the command conda env update --prefix=/tmp/cirrus-ci-build/.nox/tests --file=requirements/ci/py36.yml --prune. Proposed alternative:

  1. Create the environment:
    a. If environment-py36.lock doesn't exist, resolve an environment with conda env create, then use conda-lock to create an explicit lock file, environment-py36.lock.
    b. If environment-py36.lock does exist, use conda create --file environment-py36.lock to avoid resolving a new env.
  2. Run the tests.
  3. Only cache environment-py36.lock, using our current cache invalidation logic (weekly, I think?), rather than the whole conda directory.

This would give us reproducible environments without committing an explicit environment to our repo, and would avoid long resolve and update calls in the CI.
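
A minimal sketch of what the "create environment" step could look like, assuming illustrative file names, prefix, and platform (the exact conda-lock invocation would need checking against its docs):

```sh
# Sketch only: paths, file names, and platform are placeholders.
SPEC=requirements/ci/py36.yml
LOCKFILE=environment-py36-linux-64.lock
PREFIX=/tmp/cirrus-ci-build/.nox/tests

if [ -f "$LOCKFILE" ]; then
    # 1b: a cached lock file exists -- create the env from it, no solve needed
    conda create --yes --prefix "$PREFIX" --file "$LOCKFILE"
else
    # 1a: no lock file yet -- solve once from the spec, then record the result
    conda env create --prefix "$PREFIX" --file "$SPEC"
    conda-lock --file "$SPEC" --platform linux-64 \
               --filename-template "environment-py36-{platform}.lock"
fi
```

The lock file, rather than the whole conda directory, would then be the only thing Cirrus caches (step 3).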

Alternatively, we could resolve lock files and check them in to our repository. This is standard practice in the JavaScript world; see https://stackoverflow.com/questions/44206782/do-i-commit-the-package-lock-json-file-created-by-npm-5 for some discussion of the pros and cons of doing so.
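
If we did go that way, the lock files could be generated from the existing specs for each CI platform and committed, something along these lines (platforms, output paths, and the --kind flag are assumptions to illustrate the idea, not a settled design):

```sh
# Hypothetical: generate explicit, per-platform lock files from the py36 spec;
# the resulting environment-py36-*.lock files would be committed to the repo.
conda-lock --file requirements/ci/py36.yml \
           --platform linux-64 --platform osx-64 \
           --kind explicit \
           --filename-template "requirements/ci/environment-py36-{platform}.lock"
```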

@jamesp
Member Author

jamesp commented Apr 21, 2021

I've used environment-py36 as an example; I'm sure you can all imagine how that might scale to the matrix of Python envs!

@jamesp
Member Author

jamesp commented Apr 26, 2021

Alternatively, if we trust the cache we could just remove the conda env update command and use the last known environment.

Thinking on it a bit over the last few days, I wonder if we should be considering committing lock files to our ci folder, and having a process as part of the release cycle to update them.

@trexfeathers
Contributor

> I wonder if we should be considering committing lock files to our ci folder, and having a process as part of the release cycle to update them

I support this >100%! This isn't currently publicly visible, but environment management is core to benchmarking Iris, and waiting for Conda is an equal pain there. So it would be great if the env spec were available outside anything Cirrus-specific.

@rcomer
Member

rcomer commented Apr 26, 2021

I've probably misunderstood, but does that mean we wouldn't test against new releases of dependencies until we're about to cut an Iris release?

@trexfeathers
Contributor

> I've probably misunderstood, but does that mean we wouldn't test against new releases of dependencies until we're about to cut an Iris release?

I may have misunderstood myself, but I had thought the dependencies would periodically be refreshed (weekly?), the benefit being that work on a PR wouldn't be held up by this.
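
For what it's worth, one way the weekly refresh could be wired up (purely a sketch; the key format is an assumption, not anything we have today) is to fold the ISO week number into whatever cache key guards the lock file, so it is reused within a week and re-resolved when the week rolls over:

```sh
# Hypothetical cache-key script: the output changes once per ISO week,
# so the cached lock file is invalidated (and re-resolved) weekly.
echo "py36-lock-$(date +%G-%V)"
```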

@rcomer
Member

rcomer commented Apr 26, 2021

Separating failures due to dependency updates from Iris PRs sounds like a win to me!

Incidentally, I noticed recently that xarray are also doing regular testing against development versions of dependencies, so they get earlier warning if something is going to break. See e.g. pydata/xarray#5077

@jamesp
Member Author

jamesp commented Apr 26, 2021

Thanks @rcomer, @trexfeathers, I'll put together a POC
