Update preview VM image #11053
Conversation
/hold Putting a hold until both teams have properly tested that this works 😬
@ArthurSens Weird, I am not seeing a Results tab with a link to the preview environment here. 🤔
started the job as gitpod-build-as-updae-vm.1 because the annotations in the pull request description changed
@kylos101 you should get a new one by ticking/unticking the
@kylos101 @meysholdt are we confident about merging this one?
@kylos101 @ArthurSens The cgroup filesystem in the preview environment is still v1.
@ArthurSens given the feedback from @Furisto, something is not right.
@Furisto how are you checking that the file system is still on cgroup v1?
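(For reference — an assumption on the editor's part, not necessarily the check @Furisto used — a common way to tell which cgroup version a node has mounted is to stat the cgroup mount point:)
# Prints "cgroup2fs" on a cgroup v2 (unified) hierarchy; on cgroup v1 the mount shows up as "tmpfs"
stat -fc %T /sys/fs/cgroup/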
@kylos101 @Furisto, I'm not sure how to proceed here. Do you have a way to certify that a VM image belongs to a certain GitHub release? How do we make sure we have the correct image in place? And regarding cgroup v2, do you configure something to make it available in production? Am I missing something here?
This is how we activate cgroup v2 in prod: https://github.com/gitpod-io/gitpod-packer-gcp-image/blob/a76de6e8bd479e4f441d5e86affc642abf15c246/setup.sh#L115
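(For readers without access to the linked repo, a minimal sketch of the usual GRUB-based approach, assuming an Ubuntu-style image — the linked setup.sh is authoritative and may differ in detail:)
# Sketch only: append systemd's switch for the unified (v2) cgroup hierarchy
# to the kernel command line, then regenerate the GRUB config.
# The change takes effect on the next boot.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=1"/' /etc/default/grub
update-grub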
@ArthurSens do you need anything else to test this? @jenting @utam0k do you have any other recommendations for @ArthurSens ? I ask because I know you wrote this internal doc last night.
The internal doc written by toru does help a little bit, but the need for a reboot makes things very complicated to add to our CI at the moment... I'll try to take another look at this tomorrow 🤔
Thank you, Arthur. It would be good to support reboot, because we are thinking of adding a werft annotation that lets us switch between cgroup v1 and cgroup v2.
@ArthurSens Can we at least decide when we first create the preview-env?
Does "having a werft annotation to make us able to switch from cgroup v1 or cgroup v2" really require support for reboot? Assumtion: What we want to have here is the ability to pass Linux kernel boot parameters from Werft to the VM as part of the VM-creation-process. |
👋 let's try to keep this like a 🛹. My recommendation is:
If following the document to test on cgroup v1 happens often, and it'd save people time in the future, then we can consider the werft annotation in a future PR and issue. In other words, right now it's too early to say whether we truly need a werft annotation to switch between types of cgroup for a preview environment instance. It'd be better (best?) to save that energy for the Platform Team, so they can work on other things.
The problem here is that it looks like a reboot is needed mid-CI to enable cgroup v2, and that is quite complex to implement. We depend on the VM being up for several steps of the CI. I'm not saying it is impossible, but it will require a fair amount of refactoring 🙃 Is there any other way of enabling cgroup v2 that won't require a reboot?
AFAIK, there is no way without a reboot, because cgroup is a core feature of the Linux kernel. All processes belong to some cgroup. It is probably impossible to switch this without a restart.
@ArthurSens 👋 I'm not sure why the reboot is necessary if you're using our updated image. Is Harvester mutating this grub entry? My understanding based on this comment is that on boot-up, we'd be using cgroup v2. Can you share the output of
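(The requested command is cut off in this transcript. As an assumption, a typical way to verify which cgroup mode the VM actually booted with is to inspect the kernel command line:)
# Shows the kernel parameters the running VM booted with; on a cgroup v2
# image you would expect to see systemd.unified_cgroup_hierarchy=1 here.
cat /proc/cmdline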
/werft with-preview=true 👎 unknown command: with-preview=true
/werft with-preview 👎 unknown command: with-preview
/werft run with-preview=true 👍 started the job as gitpod-build-as-updae-vm.8
@meysholdt do you have the same trouble when using the latest image? That would be this one.
@meysholdt besides @kylos101's comment, please make sure k3s is started with the flag
kubectl apply -f /var/lib/gitpod/manifests/csi-driver.yaml
kubectl apply -f /var/lib/gitpod/manifests/csi-config.yaml
These paths got changed in https://github.com/gitpod-io/gitpod-packer-gcp-image/pull/114
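(A quick sanity check, sketched against the paths from the two commands above:)
# Confirm the manifests exist at the new location before applying them.
ls -l /var/lib/gitpod/manifests/csi-driver.yaml /var/lib/gitpod/manifests/csi-config.yaml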
With the latest image and a fresh VM now the preview env succeeds: https://werft.gitpod-dev.com/job/gitpod-custom-as-updae-vm.11
Thank you, @vulkoingim!
I can confirm that the preview env on this PR runs on image default/gitpod-k3s-202207120820.qcow2 and that I could successfully start a workspace inside that preview env.
/werft run with-preview=true 👍 started the job as gitpod-build-as-updae-vm.19
@jenting this job won't run with the changes, it has to be started manually from the CLI. I'll do it in a second and send a link, after I clean the current VM.
Thank you
I could open the workspace, and the underlying is
/unhold |
Signed-off-by: ArthurSens [email protected]
Description
Updates the VM image to gitpod-k3s-202206291903
Related Issue(s)
Fixes #10832
How to test
Release Notes
Documentation
Werft options: