dev_container #2613
Yes, with free/eval Codespaces the disk space is limited, but the main issues are: And https://discuss.tensorflow.org/t/adopting-open-source-dockerfiles-for-official-tf-nightly-ci/6050/4 /cc @seanpmorgan
IMO, the codespace container should use a different image instead of the devops image.
Why do we need multipython in the codespace?
The point is to have the same developer container as the one we use in the CI, so that when we develop TF Addons and when we automatically validate it with the CI we are almost on the same page, without too much risk of the two environments getting out of sync; it seems this kind of drift happens quite often, sooner or later, when you have two independent envs. But with the new TF Docker refactoring effort we no longer have any CPU image either.
Could we specify different …
We could separate latest-cpu and latest-gpu Docker images. If you look, the image type was already an ARG controlled by addons/.devcontainer/Dockerfile Line 1 in 41eaa27.
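For illustration, a minimal sketch of what that ARG-controlled first line could look like; the build-arg name and default value here are assumptions, not the exact contents of addons/.devcontainer/Dockerfile:

```Dockerfile
# Hypothetical sketch: pick the dev_container flavor via a build arg, so the
# same Dockerfile can produce either a CPU or a GPU devcontainer.
ARG IMAGE_TYPE=latest-cpu
FROM tfaddons/dev_container:${IMAGE_TYPE}
```

A Codespaces setup could then default the arg to latest-cpu while the CI overrides it with latest-gpu.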
The problem is that the image on our (Addons) DockerHub registry is de facto a GPU one after #2598 (comment) was merged. I've prepared an upstream PR to start separating the baseline (CPU) and CUDA layers: we still need to work with comments in the same single Dockerfile. See more at:
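To make the intended split concrete, here is a minimal sketch of an intermediate CPU stage with the CUDA layers kept in a separate stage of the same single Dockerfile; the base image, packages, and stage names are illustrative assumptions, not the contents of the upstream PR:

```Dockerfile
# Intermediate CPU baseline: CPU users (and the Codespaces devcontainer)
# can stop here with `docker build --target cpu .`
FROM ubuntu:20.04 AS cpu
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip git && \
    rm -rf /var/lib/apt/lists/*

# Only this stage would add the CUDA/cuDNN layers, so the CPU target above
# never carries them; the actual packages are whatever the upstream recipe pins.
FROM cpu AS gpu
# RUN <install CUDA and cuDNN from NVIDIA's apt repository>
```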
TF doesn't want to accept the contribution of an intermediate CPU target based on a small refactoring of their own new recipe (tensorflow/build#47 (comment)). So when we merge @seanpmorgan's (and my) #2515 we will still have all the CUDA layer overhead. I'll accept any suggestion, but on my side I don't want to maintain multiple diverging Dockerfile recipes between the Addons devel env and the Addons CI.
/cc @yarri-oss
Discussed this in the grooming meeting. It's certainly something we want supported for Addons, but we're not willing to build our own containers given that the custom-op image is no longer supported. Let's bring this up at the next SIG Build meeting to see if we can get any traction.
We discussed this in today's SIG BUILD meeting, but it seems that the tensorflow/build#47 (comment) review could not go ahead.
TensorFlow Addons is transitioning to a minimal maintenance and release mode. New features will not be added to this repository. For more information, please see our public messaging on this decision: Please consider sending feature requests / contributions to other repositories in the TF community with charters similar to TFA:
System information
no
Describe the bug
Why is tfaddons/dev_container:latest-cpu so big (6.3 GB)? It has some CUDA layers and lots of apt-update layers, which increase the image size.
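On the apt-update layers specifically: every separate `RUN apt-get update` persists its own layer (package lists included) in the published image, so the usual fix, sketched below with illustrative package names, is to combine update, install, and cleanup in a single RUN:

```Dockerfile
# One RUN for update + install + cleanup, so the apt package lists never
# survive as an extra layer in the final image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl git && \
    rm -rf /var/lib/apt/lists/*
```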
Code to reproduce the issue
Create a codespace.
Other info / logs
https://hub.docker.com/layers/tfaddons/dev_container/latest-cpu/images/sha256-e97c0a51c9da13134b9e4f2a27aeee662def8e77ced84224f4dcd90e00cc18d3?context=explore