
Conversation

djoshy
Contributor

@djoshy djoshy commented Aug 1, 2025

- What I did
This adds a new Kubernetes CronJob manifest in the MCO's install folder to delete the unused v1alpha1 MCN CRD. The job has a run level of 0000_80_machine-config_00, meaning it is deployed before the CVO applies the v1 MCN CRD, whose run level is 0000_80_machine-config_01. While the delete may not happen instantly, once it completes, the CVO will be able to successfully apply the v1 CRD. I also moved the RBAC manifest up to the same run level so that the CronJob has the required permissions on its first try.
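For orientation, here is a minimal sketch of what such a CronJob manifest could look like, pieced together from the manifest excerpts and pod logs quoted later in this thread. The service account name, image reference, and exact script wiring are illustrative assumptions, not the merged manifest:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: machine-config-nodes-crd-cleanup
  namespace: openshift-machine-config-operator
spec:
  # Run every minute initially to trigger an immediate run
  schedule: "* * * * *"
  # Don't suspend initially - let it run once
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          # Illustrative name; the relocated RBAC manifest binds whatever account the job actually uses.
          serviceAccountName: machine-config-crd-cleanup
          containers:
          - name: crd-cleanup
            # Illustrative placeholder; per the review discussion below, the merged manifest references a payload image.
            image: payload-cli-image-placeholder
            command:
            - /bin/bash
            - -c
            - |
              CRD=machineconfignodes.machineconfiguration.openshift.io
              echo "Checking for MachineConfigNodes CRD with v1alpha1 version..."
              # Delete the CRD only if it still serves the old v1alpha1 version.
              if oc get crd "$CRD" -o jsonpath='{.spec.versions[*].name}' 2>/dev/null | grep -qw v1alpha1; then
                echo "Found CRD $CRD with v1alpha1 version, deleting it..."
                oc delete crd "$CRD"
                echo "Successfully deleted CRD $CRD"
                echo "CRD cleanup completed successfully"
              else
                echo "CRD $CRD does not have v1alpha1 version, nothing to clean up"
              fi
              # Suspend the CronJob so it only ever does real work once.
              echo "Suspending cronjob..."
              oc patch cronjob machine-config-nodes-crd-cleanup -n openshift-machine-config-operator \
                --type merge -p '{"spec":{"suspend":true}}'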

- How to verify it

  • Launch a cluster on 4.15. (or 4.19 if you're feeling lazy)
  • Apply the v1alpha1 MCN CRD from release-4.15.
  • (Chain) Upgrade the cluster to a 4.20 OCP image with this PR.
  • Observe the cluster-version-operator logs in the openshift-cluster-version namespace. When the CVO finally gets to the MCO's install/ manifests (~780 mark):
I0806 20:42:48.328562       1 batch.go:55] CronJob openshift-machine-config-operator/machine-config-nodes-crd-cleanup not found, creating
I0806 20:42:48.394598       1 sync_worker.go:1056] Done syncing for cronjob "openshift-machine-config-operator/machine-config-nodes-crd-cleanup" (821 of 975)

...
I0806 20:43:51.937103       1 apiext.go:19] CRD machineconfignodes.machineconfiguration.openshift.io not found, creating
E0806 20:43:52.020275       1 task.go:128] "Unhandled Error" err="error running apply for customresourcedefinition \"machineconfignodes.machineconfiguration.openshift.io\" (826 of 975): CustomResourceDefinition machineconfignodes.machineconfiguration.openshift.io does not declare an Established status condition: []" logger="UnhandledError"
I0806 20:44:02.030496       1 sync_worker.go:1056] Done syncing for customresourcedefinition "machineconfignodes.machineconfiguration.openshift.io" (826 of 975)

The last line is the MCN sync, which may fail initially while the job is still running. Examine cronjobs via:

$ oc get cronjob
NAME                               SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
machine-config-nodes-crd-cleanup   * * * * *   <none>     True      0        35s             37s

This should've spawned a job:

$ oc get job
NAME                                        STATUS     COMPLETIONS   DURATION   AGE
machine-config-nodes-crd-cleanup-29241788   Complete   1/1           11s        3m48s

Running an oc get pods | grep cleanup should get you the pod name,

$ oc get pod | grep cleanup
machine-config-nodes-crd-cleanup-29241788-zpmf5           0/1     Completed   0             4m45s

which can be examined to observe the logs:

$ oc logs -f machine-config-nodes-crd-cleanup-29241788-zpmf5
Checking for MachineConfigNodes CRD with v1alpha1 version...
Found CRD machineconfignodes.machineconfiguration.openshift.io with v1alpha1 version, deleting it...
customresourcedefinition.apiextensions.k8s.io "machineconfignodes.machineconfiguration.openshift.io" deleted
Successfully deleted CRD machineconfignodes.machineconfiguration.openshift.io
CRD cleanup completed successfully
Suspending cronjob...
cronjob.batch/machine-config-nodes-crd-cleanup patched

This took about a minute for me (GCP cluster); the CVO sync should now progress to roll out the 4.20 MCO pods. The new operator will then begin updating the control-plane and worker nodes. It's possible that the job logs may get lost as the cluster updates, but the CR itself should persist.

On an install, this cronjob should be a no-op; you should see pod logs like:

$ oc logs -f machine-config-nodes-crd-cleanup-29241788-zpmf5
Checking for MachineConfigNodes CRD with v1alpha1 version...
CRD machineconfignodes.machineconfiguration.openshift.io does not have v1alpha1 version, nothing to clean up
Suspending cronjob...
cronjob.batch/machine-config-nodes-crd-cleanup patched

- Other notes

I chose a CronJob for two reasons:

  • The CVO will block on Jobs defined via manifests; this means that none of the required MCO components after it (or any other components at the same run level) would get created until the Job completes. This is fine during upgrades, but during installs it can cause a problem due to the task graph flattening.
  • In addition, the CVO splits the manifest graph on Job manifests, meaning every manifest in task nodes defined before it needs to be successfully applied. Again, this would not be an issue during upgrades, as the order of task nodes can be reasoned about, but in install mode this could block any number of manifests.

@openshift-ci-robot openshift-ci-robot added the jira/severity-critical, jira/valid-reference, and jira/valid-bug labels Aug 1, 2025
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-59723, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.20.0) matches configured target version for branch (4.20.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

- What I did
This adds a new Kubernetes job manifest in the MCO's install folder to delete the unused v1alpha1 MCN CRD. The job has a run level of 0000_80_machine-config_00, meaning it is deployed before the CVO applies the v1 MCN CRD, whose run level is 0000_80_machine-config_01. While the delete may not happen instantly, once it completes, the CVO will be able to successfully apply the v1 CRD.

- How to verify it

  • Launch a cluster on 4.15. (or 4.19 if you're feeling lazy)
  • Apply the v1alpha1 MCN CRD from release-4.15.
  • (chain) Upgrade the cluster to a 4.20 OCP image with this PR.
  • Observe the cluster-version-operator logs in the openshift-cluster-version namespace. When the CVO finally gets to the MCO's install/ manifests (~780 mark):
I0801 18:39:04.957194       1 sync_worker.go:1041] Running sync for job "openshift-machine-config-operator/machine-config-nodes-crd-cleanup" (777 of 975)
...
I0801 18:39:06.408255       1 sync_worker.go:1041] Running sync for customresourcedefinition "machineconfignodes.machineconfiguration.openshift.io" (802 of 975)

The last line is the MCN sync, which may fail initially while the job is still running. Examine jobs via:

$ oc get jobs -n openshift-machine-config-operator
NAME                               STATUS     COMPLETIONS   DURATION   AGE
machine-config-nodes-crd-cleanup   Complete   1/1           20s        40m

Running an oc describe on this job will also get you the pod name, which can be examined to observe the logs:

$ oc logs -f machine-config-nodes-crd-cleanup-lqkv8
Checking for MachineConfigNodes CRD with v1alpha1 version...
Found CRD machineconfignodes.machineconfiguration.openshift.io with v1alpha1 version, deleting it...
customresourcedefinition.apiextensions.k8s.io "machineconfignodes.machineconfiguration.openshift.io" deleted
Successfully deleted CRD machineconfignodes.machineconfiguration.openshift.io
CRD cleanup completed successfully

This took about a minute for me (GCP cluster); the CVO sync should now progress to roll out the 4.20 MCO pods. The new operator will then begin updating the control-plane and worker nodes. It's possible that the job logs may get lost as the cluster updates, but the CR itself should persist.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot added the approved label Aug 1, 2025
Member

@isabella-janssen isabella-janssen left a comment

This looks great to me, thanks @djoshy. Since I cannot answer your note about the image, I'll leave final review & tagging to a more senior member of the team than I.

Contributor

@yuqi-zhang yuqi-zhang left a comment

Overall makes sense to me, just some follow-up questions inline

@djoshy
Contributor Author

djoshy commented Aug 2, 2025

/retest

@djoshy djoshy force-pushed the delete-old-mcn-crd branch from 1bcdfd1 to 10718d1 Compare August 4, 2025 15:17
@djoshy djoshy marked this pull request as draft August 5, 2025 10:09
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress label Aug 5, 2025
@djoshy djoshy force-pushed the delete-old-mcn-crd branch 2 times, most recently from d679071 to 61bce7e Compare August 5, 2025 17:17
@djoshy
Contributor Author

djoshy commented Aug 5, 2025

/test e2e-gcp-op

@djoshy djoshy force-pushed the delete-old-mcn-crd branch 3 times, most recently from 43bc880 to a9b20dd Compare August 6, 2025 17:21
@openshift-ci-robot
Contributor

@djoshy: This pull request references Jira Issue OCPBUGS-59723, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.20.0) matches configured target version for branch (4.20.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

In response to this:

- What I did
This adds a new Kubernetes CronJob manifest in the MCO's install folder to delete the unused v1alpha1 MCN CRD. The job has a run level of 0000_80_machine-config_00, meaning it is deployed before the CVO applies the v1 MCN CRD, whose run level is 0000_80_machine-config_01. While the delete may not happen instantly, once it completes, the CVO will be able to successfully apply the v1 CRD. I also moved the RBAC manifest up to the same run level so that the CronJob has the required permissions on its first try.

- How to verify it

  • Launch a cluster on 4.15. (or 4.19 if you're feeling lazy)
  • Apply the v1alpha1 MCN CRD from release-4.15.
  • (chain) Upgrade the cluster to a 4.20 OCP image with this PR.
  • Observe the cluster-version-operator logs in the openshift-cluster-version namespace. When the CVO finally gets to the MCO's install/ manifests (~780 mark):
I0801 18:39:04.957194       1 sync_worker.go:1041] Running sync for job "openshift-machine-config-operator/machine-config-nodes-crd-cleanup" (777 of 975)
...
I0801 18:39:06.408255       1 sync_worker.go:1041] Running sync for customresourcedefinition "machineconfignodes.machineconfiguration.openshift.io" (802 of 975)

The last line is the MCN sync, which may fail initially while the job is still running. Examine cronjobs via:

$ oc get cronjob
NAME                               SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
machine-config-nodes-crd-cleanup   * * * * *   <none>     True      0        35s             37s

This should've spawned a job:

$ oc get job
NAME                                        STATUS     COMPLETIONS   DURATION   AGE
machine-config-nodes-crd-cleanup-29241788   Complete   1/1           11s        3m48s

Running an oc get pods | grep cleanup should get you the pod name,

$ oc get pod | grep cleanup
machine-config-nodes-crd-cleanup-29241788-zpmf5           0/1     Completed   0             4m45s

which can be examined to observe the logs:

$ oc logs -f machine-config-nodes-crd-cleanup-29241788-zpmf5
Checking for MachineConfigNodes CRD with v1alpha1 version...
Found CRD machineconfignodes.machineconfiguration.openshift.io with v1alpha1 version, deleting it...
customresourcedefinition.apiextensions.k8s.io "machineconfignodes.machineconfiguration.openshift.io" deleted
Successfully deleted CRD machineconfignodes.machineconfiguration.openshift.io
CRD cleanup completed successfully

This took about a minute for me (GCP cluster); the CVO sync should now progress to roll out the 4.20 MCO pods. The new operator will then begin updating the control-plane and worker nodes. It's possible that the job logs may get lost as the cluster updates, but the CR itself should persist.

On an install, this cronjob should be a no-op; you should see pod logs like:

$ oc logs -f machine-config-nodes-crd-cleanup-29241788-zpmf5
Checking for MachineConfigNodes CRD with v1alpha1 version...
CRD machineconfignodes.machineconfiguration.openshift.io does not have v1alpha1 version, nothing to clean up
Suspending cronjob...
cronjob.batch/machine-config-nodes-crd-cleanup patched

- Other notes

I chose a CronJob for two reasons:

  • The CVO will block on Jobs defined via manifests; this means that none of the required MCO components after it (or any other components at the same run level) would get created until the Job completes. This is fine during upgrades, but during installs it can cause a problem due to the task graph flattening.
  • In addition, the CVO splits the manifest graph on Job manifests, meaning every manifest in task nodes defined before it needs to be successfully applied. Again, this would not be an issue during upgrades, as the order of task nodes can be reasoned about, but in install mode this could block any number of manifests.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@djoshy djoshy force-pushed the delete-old-mcn-crd branch from a9b20dd to 682dbeb Compare August 6, 2025 19:28
@djoshy djoshy marked this pull request as ready for review August 6, 2025 19:29
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress label Aug 6, 2025
@openshift-ci openshift-ci bot requested review from umohnani8 and yuqi-zhang August 6, 2025 19:30
Contributor

@yuqi-zhang yuqi-zhang left a comment

Overall the cronjob path makes sense to me, thanks for investigating thoroughly! Some last questions inline

restartPolicy: OnFailure
containers:
- name: crd-cleanup
  image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48011539e9051f642e33e7aaa4617b93387e0d441f84d09d41f8f7966668bb04
Contributor

hmm, should this be hardcoded? I thought you wanted to reference the RHCOS image of the corresponding payload

Or if we want to remove this in 4.21+ maybe that's fine as well to just use a singular

Contributor Author

oh d'oh - I meant to change this back, was doing some manual testing and it must have snuck in. Although I guess your argument is fair, using the placeholder just makes it a tad more readable.

Contributor Author

Should be good now 😄

# Run every minute initially to trigger an immediate run
schedule: "* * * * *"
# Don't suspend initially - let it run once
suspend: false
Contributor

does this re-set every upgrade then? Since we patch it in the actual bash script, I thought the CVO would try to rectify back to this template?

Contributor Author

No, I think the create-only annotation should prevent updates to this resource.
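For reference, a minimal sketch of the metadata this refers to, assuming the standard CVO create-only annotation is the mechanism in play; the merged manifest may differ:

metadata:
  name: machine-config-nodes-crd-cleanup
  namespace: openshift-machine-config-operator
  annotations:
    # The CVO creates the resource if it is missing but does not reconcile it afterwards,
    # so the script's self-suspend patch is not reverted on later syncs.
    release.openshift.io/create-only: "true"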

@djoshy
Contributor Author

djoshy commented Aug 7, 2025

/payload 4.20 blocking

just to be safe

Contributor

openshift-ci bot commented Aug 7, 2025

@djoshy: it appears that you have attempted to use some version of the payload command, but your comment was incorrectly formatted and cannot be acted upon. See the docs for usage info.

@djoshy
Contributor Author

djoshy commented Aug 7, 2025

/payload 4.20 nightly blocking

Contributor

openshift-ci bot commented Aug 7, 2025

@djoshy: trigger 10 job(s) of type blocking for the nightly release of OCP 4.20

  • periodic-ci-openshift-release-master-ci-4.20-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-nightly-4.20-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-master-ci-4.20-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.20-upgrade-from-stable-4.19-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.20-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.20-e2e-aws-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.20-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.20-e2e-aws-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-nightly-4.20-e2e-metal-ipi-ovn-bm
  • periodic-ci-openshift-release-master-nightly-4.20-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/69a756c0-73b3-11f0-8238-34e6750a6c1d-0

@djoshy
Contributor Author

djoshy commented Aug 7, 2025

/test unit

Contributor

openshift-ci bot commented Aug 7, 2025

@djoshy: it appears that you have attempted to use some version of the payload command, but your comment was incorrectly formatted and cannot be acted upon. See the docs for usage info.

1 similar comment
Contributor

openshift-ci bot commented Aug 7, 2025

@djoshy: it appears that you have attempted to use some version of the payload command, but your comment was incorrectly formatted and cannot be acted upon. See the docs for usage info.

@djoshy
Contributor Author

djoshy commented Aug 7, 2025

/cherry-pick release-4.19

@openshift-cherrypick-robot

@djoshy: once the present PR merges, I will cherry-pick it on top of release-4.19 in a new PR and assign it to you.

In response to this:

/cherry-pick release-4.19

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@djoshy
Contributor Author

djoshy commented Aug 8, 2025

/test all

@djoshy
Contributor Author

djoshy commented Aug 8, 2025

/retest-required

1 similar comment
@djoshy
Contributor Author

djoshy commented Aug 8, 2025

/retest-required

@yuqi-zhang
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Aug 8, 2025
Contributor

openshift-ci bot commented Aug 8, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@djoshy
Contributor Author

djoshy commented Aug 8, 2025

/override ci/prow/e2e-gcp-op-1of2
/override ci/prow/e2e-gcp-op-2of2
/override ci/prow/e2e-gcp-op-single-node

Overriding GCP tests: this PR has passed them before on this commit, and these jobs are failing to launch due to some infra issues.

Contributor

openshift-ci bot commented Aug 8, 2025

@djoshy: Overrode contexts on behalf of djoshy: ci/prow/e2e-gcp-op-1of2, ci/prow/e2e-gcp-op-2of2, ci/prow/e2e-gcp-op-single-node

In response to this:

/override ci/prow/e2e-gcp-op-1of2
/override ci/prow/e2e-gcp-op-2of2
/override ci/prow/e2e-gcp-op-single-node

Overriding GCP tests: this PR has passed them before, and these jobs are failing to launch due to some infra issues.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-merge-bot openshift-merge-bot bot merged commit 8f6c60c into openshift:main Aug 8, 2025
16 of 24 checks passed
@openshift-ci-robot
Contributor

@djoshy: Jira Issue OCPBUGS-59723: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-59723 has been moved to the MODIFIED state.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Contributor

openshift-ci bot commented Aug 8, 2025

@djoshy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-gcp-mco-disruptive 34d00bb link false /test e2e-gcp-mco-disruptive
ci/prow/e2e-aws-ovn-windows 34d00bb link false /test e2e-aws-ovn-windows
ci/prow/okd-scos-e2e-aws-ovn 34d00bb link false /test okd-scos-e2e-aws-ovn
ci/prow/e2e-aws-mco-disruptive 34d00bb link false /test e2e-aws-mco-disruptive
ci/prow/e2e-gcp-op-ocl 34d00bb link false /test e2e-gcp-op-ocl

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-cherrypick-robot

@djoshy: new pull request created: #5233

In response to this:

/cherry-pick release-4.19

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-machine-config-operator
This PR has been included in build ose-machine-config-operator-container-v4.20.0-202508081921.p0.g8f6c60c.assembly.stream.el9.
All builds following this will include this PR.

@djoshy
Contributor Author

djoshy commented Aug 10, 2025

/cherry-pick release-4.19

@openshift-cherrypick-robot

@djoshy: new pull request could not be created: failed to create pull request against openshift/machine-config-operator#release-4.19 from head openshift-cherrypick-robot:cherry-pick-5215-to-release-4.19: status code 422 not one of [201], body: {"message":"Validation Failed","errors":[{"resource":"PullRequest","code":"custom","message":"A pull request already exists for openshift-cherrypick-robot:cherry-pick-5215-to-release-4.19."}],"documentation_url":"https://docs.github.com/rest/pulls/pulls#create-a-pull-request","status":"422"}

In response to this:

/cherry-pick release-4.19

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
