Fixing the clean-slate-deployment of new VM-based preview environments #8748
Conversation
Force-pushed from c6d38ae to ce20d5c
Codecov Report
```
@@            Coverage Diff             @@
##             main    #8748      +/-   ##
==========================================
- Coverage   12.31%   11.17%   -1.14%
==========================================
  Files          20       18       -2
  Lines        1161      993     -168
==========================================
- Hits          143      111      -32
+ Misses       1014      880     -134
+ Partials        4        2       -2
```
The changes don't look right to me. You're passing in the namespace and then also appending `preview` in the `-n` argument to kubectl.
The code looked right before, though, so I suspect whatever issue you're looking for is hiding somewhere else.
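To make the suspected bug concrete, here is a minimal, hypothetical sketch; the helper names, option shape, and `KUBECONFIG_PATH` value are assumptions for illustration, not the actual `.werft/vm/vm.ts` code:

```ts
// Hypothetical sketch only: prefixing "preview-" onto a namespace that is already
// fully qualified makes kubectl target a non-existent namespace like "preview-preview-<name>".
const KUBECONFIG_PATH = "/home/gitpod/.kube/config" // assumed value, for illustration

function buggyVmiLookup(options: { name: string; namespace: string }): string {
    // Bug: options.namespace is already "preview-<name>", so the extra prefix doubles it.
    return `kubectl --kubeconfig ${KUBECONFIG_PATH} -n preview-${options.namespace} get vmi ${options.name}`
}

function fixedVmiLookup(options: { name: string; namespace: string }): string {
    // Fix: use the namespace exactly as it was passed in.
    return `kubectl --kubeconfig ${KUBECONFIG_PATH} -n ${options.namespace} get vmi ${options.name}`
}

console.log(buggyVmiLookup({ name: "my-branch", namespace: "preview-my-branch" }))
// ... -n preview-preview-my-branch get vmi my-branch   <- wrong namespace
console.log(fixedVmiLookup({ name: "my-branch", namespace: "preview-my-branch" }))
// ... -n preview-my-branch get vmi my-branch
```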
@wulfthimm I suspect the real problem is that we only invoke …
Thanks, that is what I also suspect. I started with this approach and the job did what I expected it to do, hence this premature PR to unblock the beta release; I planned to take a deeper look afterwards. But I will take a deeper look now to ensure that the solution is sustainable.
Having the deletion of the previous VM inside …
Maybe we're talking past each other. Your changes have introduced a bug, which means … If we're looking at main there's a bug even if … I think the best way forward is to …
Force-pushed from ce20d5c to 2543294
Force-pushed from 2543294 to 7c6b54f
.werft/vm/vm.ts (Outdated)
```diff
- const namespace = `preview-${options.name}`
- const status = exec(`kubectl --kubeconfig ${KUBECONFIG_PATH} -n ${namespace} get vmi ${options.name}`, { dontCheckRc: true, silent: true })
+ const status = exec(`kubectl --kubeconfig ${KUBECONFIG_PATH} -n preview-${options.name} get vmi ${options.name}`, { dontCheckRc: true, silent: true })
```
I believe you want to revert this :)
Force-pushed from 1381c3c to 1ae598f
Thanks for the support. I dug deeper and refactored a bit. The `prepareVM` function is now only used when we really want a VM. `prepareVM` then checks whether the VM is present. If it is not present, `createVM` is called; otherwise the VM exists and we need to check whether a clean-slate-deployment was triggered. In that case the VM, whose existence was verified in the previous step, is deleted and recreated.
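For readers following along, here is a rough, self-contained sketch of the flow described above; the `VM` object and its `createVM`/`deleteVM` helpers are stubbed placeholders, not the actual `.werft` implementation:

```ts
// Rough sketch of the described flow, with stubbed helpers so it stands alone.
interface JobConfig {
    cleanSlateDeployment: boolean
    previewEnvironment: { destname: string }
}

// Placeholder stand-ins for the real VM helpers.
const VM = {
    vmExists: (opts: { name: string }): boolean => false,
    deleteVM: (opts: { name: string }): void => console.log(`deleting VM ${opts.name}`),
    createVM: (opts: { name: string }): void => console.log(`creating VM ${opts.name}`),
}

// Only called when the job actually wants a VM.
function prepareVM(config: JobConfig) {
    const name = config.previewEnvironment.destname
    if (!VM.vmExists({ name })) {
        // No VM yet: create it.
        VM.createVM({ name })
        return
    }
    if (config.cleanSlateDeployment) {
        // The VM verified above is deleted and recreated for a clean slate.
        console.log("Cleaning previously created VM")
        VM.deleteVM({ name })
        VM.createVM({ name })
    }
    // Otherwise the existing VM is reused.
}
```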
Thanks for tackling this bug introduced by me, wulf! 🤗
I feel like your approach is overcomplicating things a bit, though... Could we stick with the original implementation while adding your check, following the Single Responsibility Principle?
Does this approach look better to read?
```ts
function decideHarvesterVMCreation(werft: Werft, config: JobConfig) {
    if (shouldCreateVM(config)) {
        prepareVM(werft, config)
    } else {
        werft.currentPhaseSpan.setAttribute("werft.harvester.created_vm", false)
    }
    werft.done(prepareSlices.BOOT_VM)
}

function shouldCreateVM(config: JobConfig) {
    return config.withVM && (
        !VM.vmExists({ name: config.previewEnvironment.destname }) ||
        config.cleanSlateDeployment
    )
}
```
@ArthurSens I find my approach much easier to read because it does not use as many functions and is still relatively compact. But I am not a TS developer and do not have any preferences.
Using conditionals is nice because it's easy to implement things when in a hurry, but they add complexity: we need to keep track of how all those conditionals correlate across different parts of the code. When reading the code with this approach, we need to scroll up and down and look at different levels of nested conditionals, which adds cognitive noise 😵. Functions, on the other hand, are a powerful tool for reducing complexity 🙂: we can read the code once, top to bottom, without descending into nested conditionals.
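To make that contrast concrete, here is an illustrative sketch (not repository code; the declarations are placeholders mirroring the snippet above) of the same decision written with nested conditionals:

```ts
// Illustrative contrast only; the declarations are placeholders so the sketch
// type-checks on its own and are not the real werft/VM APIs.
interface JobConfig {
    withVM: boolean
    cleanSlateDeployment: boolean
    previewEnvironment: { destname: string }
}
interface Werft {
    currentPhaseSpan: { setAttribute(key: string, value: boolean): void }
    done(slice: string): void
}
declare const VM: { vmExists(opts: { name: string }): boolean }
declare const prepareSlices: { BOOT_VM: string }
declare function prepareVM(werft: Werft, config: JobConfig): void

// Nested-conditional version: the "no VM" outcome is duplicated in two branches,
// and the reader has to walk every level of nesting to see when a VM is created.
function decideHarvesterVMCreationNested(werft: Werft, config: JobConfig) {
    if (config.withVM) {
        if (!VM.vmExists({ name: config.previewEnvironment.destname }) || config.cleanSlateDeployment) {
            prepareVM(werft, config)
        } else {
            werft.currentPhaseSpan.setAttribute("werft.harvester.created_vm", false)
        }
    } else {
        werft.currentPhaseSpan.setAttribute("werft.harvester.created_vm", false)
    }
    werft.done(prepareSlices.BOOT_VM)
}
```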
Signed-off-by: ArthurSens <[email protected]>
Force-pushed from 1ae598f to 8acb739
@wulfthimm and I just did a pair-programming session to align coding style 🙂
Looks good to me, thanks!
/hold Looks like the job failed due to core-dev flakiness?
/werft run
👍 started the job as gitpod-build-wth-fix-clean-vm.44
/werft run
👍 started the job as gitpod-build-wth-fix-clean-vm.45
/unhold
Description
This fixes a problem where VMs are not removed during a `clean-slate-deployment`.

UPDATE: Thanks for the support. I dug deeper and refactored a bit. The `prepareVM` function is now only used when we really want a VM. `prepareVM` then checks whether the VM is present. If it is not present, `createVM` is called; otherwise the VM exists and we need to check whether a clean-slate-deployment was triggered. In that case the VM, whose existence was verified in the previous step, is deleted and recreated.

Related Issue(s)
Fixes #8747
How to test
Start a workspace and run `werft run github -a with-vm=true`. After the VM has been deployed, run `werft run github -a with-vm=true -a with-clean-slate-deployment`. The VM will be removed during the `prepare` phase; watch for the string "Cleaning previously created VM" in the logs. As an additional test, run `werft run github -a with-vm=true` again and make sure that the VM is not removed.

Release Notes
Documentation