Merged
103 changes: 103 additions & 0 deletions install/0000_80_machine-config_00_v1alpha1-mcn-cleanup-job.yaml
@@ -0,0 +1,103 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: machine-config-nodes-crd-cleanup
  namespace: openshift-machine-config-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    include.release.openshift.io/ibm-cloud-managed: "true"
    release.openshift.io/feature-set: Default
    # This prevents an update of this CronJob once the child job suspends it after a successful run
    release.openshift.io/create-only: "true"
spec:
  # Run every minute initially to trigger an immediate run
  schedule: "* * * * *"
  # Don't suspend initially - let it run once
  suspend: false
Contributor: does this re-set every upgrade then? Since we patch it in the actual bash script, I thought the CVO would try to rectify back to this template?

Contributor (Author): No, I think the create-only annotation should prevent updates to this resource.

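The suspend-on-success behavior discussed above comes from a bash `EXIT` trap in the job script further down. A minimal standalone sketch of that pattern, with `oc` stubbed out as a shell function so it runs outside a cluster (the stub and the temp-file path are illustration-only assumptions, not part of the manifest):

```shell
#!/usr/bin/env bash
# Write the demo to a file and run it as a child process, so the effect of
# its EXIT trap can be observed from the parent shell after it terminates.
cat > /tmp/trap_demo.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Stubbed oc: just echoes its arguments (the real job invokes the oc binary)
oc() { echo "oc $*"; }
# Same pattern as the manifest: on exit code 0, suspend the CronJob
trap 'if [ $? -eq 0 ]; then oc patch cronjob machine-config-nodes-crd-cleanup -p "{\"spec\":{\"suspend\":true}}"; fi' EXIT
echo "cleanup ran"
EOF

OUTPUT=$(bash /tmp/trap_demo.sh)
echo "$OUTPUT"
```

Because the trap command string is single-quoted, `$?` is evaluated when the trap fires rather than when it is installed, so a non-zero exit (e.g. a failed `oc delete`) leaves the CronJob unsuspended and the next scheduled run retries.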
  # Only allow 1 concurrent job and prevent overlapping runs
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        metadata:
          labels:
            app: machine-config-nodes-crd-cleanup
          annotations:
            target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
            openshift.io/required-scc: nonroot-v2
        spec:
          serviceAccountName: machine-config-operator
          restartPolicy: OnFailure
          containers:
          - name: crd-cleanup
            image: placeholder.url.oc.will.replace.this.org/placeholdernamespace:rhel-coreos
            terminationMessagePolicy: FallbackToLogsOnError
            command:
            - /bin/bash
            - -c
            - |
- |
set -euo pipefail

# Set trap to suspend cronjob on successful exit (exit code 0)
trap 'if [ $? -eq 0 ]; then echo "Suspending cronjob..."; oc patch cronjob machine-config-nodes-crd-cleanup -p "{\"spec\":{\"suspend\":true}}" --field-manager=machine-config-operator || echo "Failed to suspend cronjob"; fi' EXIT

CRD_NAME="machineconfignodes.machineconfiguration.openshift.io"

echo "Checking for MachineConfigNodes CRD with v1alpha1 version..."

# Check if CRD exists
if ! oc get crd "$CRD_NAME" >/dev/null 2>&1; then
echo "CRD $CRD_NAME does not exist, nothing to clean up"
exit 0
fi

# Check if CRD has v1alpha1 version
HAS_V1ALPHA1=$(oc get crd "$CRD_NAME" -o jsonpath='{.spec.versions[?(@.name=="v1alpha1")].name}' 2>/dev/null || echo "")

if [ -z "$HAS_V1ALPHA1" ]; then
echo "CRD $CRD_NAME does not have v1alpha1 version, nothing to clean up"
exit 0
fi

echo "Found CRD $CRD_NAME with v1alpha1 version, deleting it..."

# Delete the CRD
if oc delete crd "$CRD_NAME"; then
echo "Successfully deleted CRD $CRD_NAME"
else
echo "Failed to delete CRD $CRD_NAME"
exit 1
fi

echo "CRD cleanup completed successfully"
            resources:
              requests:
                cpu: 10m
                memory: 50Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
          securityContext:
            runAsNonRoot: true
            runAsUser: 65534
            seccompProfile:
              type: RuntimeDefault
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          priorityClassName: "system-cluster-critical"
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
          - effect: NoExecute
            key: node.kubernetes.io/unreachable
            operator: Exists
            tolerationSeconds: 120
          - effect: NoExecute
            key: node.kubernetes.io/not-ready
            operator: Exists
            tolerationSeconds: 120
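The detection logic of the job script can be exercised locally by stubbing `oc`. The stub below is an assumption for illustration (it pretends the CRD exists and still serves v1alpha1); it is not the operator's real behavior:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical oc stub that answers the two calls the cleanup script makes.
oc() {
  case "$1 $2" in
    "get crd")
      # Simulate a CRD whose .spec.versions still lists v1alpha1
      echo "v1alpha1"
      ;;
    "delete crd")
      echo "customresourcedefinition \"$3\" deleted"
      ;;
  esac
}

CRD_NAME="machineconfignodes.machineconfiguration.openshift.io"

# Same jsonpath probe as the job: non-empty output means v1alpha1 is present
HAS_V1ALPHA1=$(oc get crd "$CRD_NAME" -o jsonpath='{.spec.versions[?(@.name=="v1alpha1")].name}' 2>/dev/null || echo "")

if [ -n "$HAS_V1ALPHA1" ]; then
  RESULT=$(oc delete crd "$CRD_NAME")
  echo "$RESULT"
fi
```

In the real job the jsonpath filter returns an empty string once v1alpha1 has been dropped from the CRD, so a rerun exits 0 without attempting a delete, which keeps the job idempotent.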