Prebuilds show wrong Pending status while pulling & building image(s) #11248
Comments
@szab100 I agree that the status and message we display is confusing.
I think that's fine, because image builds and prebuilds are fundamentally decoupled, and an image might be reused by n prebuilds, so not taking that time into account IMO makes sense. Technically, PENDING also makes sense from the perspective of the prebuild, because it is waiting for the image build to finish. But we definitely need to update the misleading text message. @szab100 Would that help already? 🤔
Hi @geropl, thanks for sharing your view. The way I see it, the entire build process triggered by a VCS webhook should be considered part of the prebuild, including building the image, running the 'before' and 'init' commands, etc. So IMO, users should be able to clearly see and understand how much time was spent (in total and in each individual phase) during prebuilds and workspace starts: either building the workspace image from scratch plus pulling it [X], or pre-building (including the image build) [Y] plus loading the prebuild [Z]. Y + Z should be significantly lower than X.

With proper timing stats, users (and admins) can work on further optimizing workspace startup times, since those depend heavily on the characteristics of both the Dockerfile and the before/init/command tasks; even the IDE used has an effect on prebuild and workspace startup times. So it would be nice to see the lifecycle changes broken down, and to have these stats pushed to monitoring/analytics.

BTW, we experienced a seemingly buggy behavior with prebuilds on our self-hosted installation today: prebuilds stayed in PENDING state (3 out of 3 that we triggered sequentially). However, their imagebuild-xxx pods actually ran in the namespace and finished successfully according to the pod logs, including pushing the baked image. But since the prebuilds' statuses never turned to READY, workspace starts did not use the prebuilds. Later these ghost prebuilds turned to CANCELED.
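To make the X vs. Y + Z comparison concrete, here is a purely hypothetical timing breakdown (all numbers invented for illustration, not measured):

```
Cold start without a prebuild (X):
  build workspace image    8 min
  pull workspace image     2 min
  before/init tasks        5 min
  total X                 15 min

Prebuild (Y, runs ahead of time):    Start from prebuild (Z, user-visible):
  build workspace image    8 min      pull workspace image   2 min
  before/init tasks        5 min      load prebuild          1 min
  total Y                 13 min      total Z                3 min
```

Y is paid once per commit before anyone clicks start, while Z is paid on every workspace start, so the user-visible wait drops from 15 min to 3 min. Per-phase numbers like these are exactly what a lifecycle breakdown in the UI and in monitoring would surface.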
Could not agree more. 🙃 All of this information is valuable to users, and should definitely be displayed. I think it's worth having a separate feature request for that. 💯
Next week's self-hosted release will come with a bunch of improvements to the prebuild/image build process, which I think should fix this behavior. Note that we recently introduced "auto-cancel" of running prebuilds (if they happen to be on the same branch), which may or may not be related to your observation:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Bug description
Prebuilds show a "Pending" status while they are already building the workspace's Dockerfile (i.e. the Docker part), even though the build output is already being streamed to the UI. This phase can last quite long, depending on the complexity of the Dockerfile used and the structure of its base images. Apart from the wrong UI messaging, the major issue is that, due to this time being spent in Pending status, Gitpod does not count it towards the time spent (and saved) during the prebuild.
Example:
Steps to reproduce
Use a more complex Dockerfile for the workspace image; even better if it uses a base image that has not been pulled on the node before. A sketch is shown below.
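A minimal sketch of such a setup, assuming a standard custom-image configuration (the texlive-full package is just one arbitrary way to make the build slow enough to observe):

```yaml
# .gitpod.yml — point the workspace at a custom Dockerfile
image:
  file: .gitpod.Dockerfile
```

```dockerfile
# .gitpod.Dockerfile — deliberately heavy so the image-build phase lasts
# long enough to watch the prebuild sit in "Pending" while it builds
FROM gitpod/workspace-full
RUN sudo apt-get update \
 && sudo apt-get install -y --no-install-recommends texlive-full \
 && sudo rm -rf /var/lib/apt/lists/*
```

With this in place, triggering a prebuild (e.g. by pushing a commit) should show the misleading "Pending" status for the whole image build.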
Workspace affected
No response
Expected behavior
Show the "Running" status while the image is building, or a new, differentiated status such as "Initializing" (time elapsed in this status should still be counted towards prebuild times).
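A hypothetical sketch of what a differentiated status model could look like (these are invented types, not Gitpod's actual internals): a dedicated image-build phase lets the UI stop reporting "Pending", and per-phase timestamps let elapsed prebuild time include the image build.

```typescript
// Invented for illustration — not Gitpod's real status type.
type PrebuildPhase =
    | "pending"          // queued, nothing running yet
    | "building-image"   // pulling base layers / building the Dockerfile
    | "running"          // executing before/init tasks
    | "available"
    | "failed"
    | "canceled";

interface PrebuildStatus {
    phase: PrebuildPhase;
    // One timestamp per phase entered, enabling per-phase duration stats.
    phaseStartedAt: Partial<Record<PrebuildPhase, Date>>;
}

// Elapsed prebuild time would then start at the image build, not the tasks.
function elapsedMs(s: PrebuildStatus, now = new Date()): number {
    const start =
        s.phaseStartedAt["building-image"] ?? s.phaseStartedAt["running"];
    return start ? now.getTime() - start.getTime() : 0;
}
```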
Example repository
No response
Anything else?
No response