## Description

### Problem description
When a Helm chart contains Helm hooks defined as Jobs, `helm diff` does not behave as expected when used with the `--take-ownership` flag:

1. When no changes are made to the chart, the chart is reported as having changes anyway.
#### Correct behavior without the `--take-ownership` flag (empty diff)
```
$ helm diff upgrade helm-diff-repro . --install --debug
Executing helm version
Executing helm get manifest helm-diff-repro --namespace default
Executing helm get values helm-diff-repro --output yaml --all
Executing helm version
Executing helm template helm-diff-repro . --namespace default --values /var/folders/w2/1243kx491313m323bfpyn50w0000gn/T/existing-values2739512397 --validate --is-upgrade --dry-run=client
Executing helm get hooks helm-diff-repro --namespace default
```
#### Incorrect behavior with the `--take-ownership` flag
```
$ helm diff upgrade helm-diff-repro . --install --take-ownership --debug
Executing helm version
Executing helm get manifest helm-diff-repro --namespace default
Executing helm get values helm-diff-repro --output yaml --all
Executing helm version
Executing helm template helm-diff-repro . --namespace default --values /var/folders/w2/1243kx491313m323bfpyn50w0000gn/T/existing-values3415737044 --take-ownership --validate --is-upgrade --dry-run=client
default, helm-diff-repro-hook, Job (batch) changed ownership:
-
+ default/helm-diff-repro
```
We are using helmfile, and this problem causes a new release to be created for every Helm chart that contains any Helm hooks, even when nothing changed; a minimal sketch of such a setup follows.
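For context, this is roughly how such a release is declared (a minimal, hypothetical `helmfile.yaml` for illustration only; helmfile invokes the `helm diff` plugin for each release to decide whether it needs to be applied):

```yaml
# Hypothetical minimal helmfile.yaml, illustrative only.
# helmfile runs "helm diff upgrade ..." for each release; because the
# hook Job above is always reported as changed, the release is always
# re-applied even when nothing changed.
releases:
  - name: helm-diff-repro
    namespace: default
    chart: ./test-chart
```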
2. When the Helm hook contains any changes, the `helm diff` command fails.
#### Correct behavior without the `--take-ownership` flag
```
$ helm diff upgrade helm-diff-repro . --install --debug
Executing helm version
Executing helm get manifest helm-diff-repro --namespace default
Executing helm get values helm-diff-repro --output yaml --all
Executing helm version
Executing helm template helm-diff-repro . --namespace default --values /var/folders/w2/1243kx491313m323bfpyn50w0000gn/T/existing-values3415473100 --validate --is-upgrade --dry-run=client
Executing helm get hooks helm-diff-repro --namespace default
default, helm-diff-repro-hook, Job (batch) has changed:
  # Source: test-chart/templates/deployment.yaml
  kind: Job
  apiVersion: batch/v1
  metadata:
    name: helm-diff-repro-hook
    annotations:
      "helm.sh/hook": pre-install,pre-upgrade
  spec:
    template:
      spec:
        containers:
          - name: helm-diff-repro-hook
            image: nginx
-           command: ["/bin/sh", "-c", "echo 'Hello, World!'"]
+           command: ["/bin/sh", "-c", "echo 'Hello, World! 1'"]
        restartPolicy: Never
```
#### Incorrect behavior (command fails) with the `--take-ownership` flag
```
$ helm diff upgrade helm-diff-repro . --install --take-ownership --debug
Executing helm version
Executing helm get manifest helm-diff-repro --namespace default
Executing helm get values helm-diff-repro --output yaml --all
Executing helm version
Executing helm template helm-diff-repro . --namespace default --values /var/folders/w2/1243kx491313m323bfpyn50w0000gn/T/existing-values854516463 --take-ownership --validate --is-upgrade --dry-run=client
Error: unable to generate manifests: cannot patch "helm-diff-repro-hook" with kind Job: Job.batch "helm-diff-repro-hook" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"batch.kubernetes.io/controller-uid":"eb0265ed-b7eb-4e12-b09f-7896f25fbec4", "batch.kubernetes.io/job-name":"helm-diff-repro-hook", "controller-uid":"eb0265ed-b7eb-4e12-b09f-7896f25fbec4", "job-name":"helm-diff-repro-hook"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"helm-diff-repro-hook", Image:"nginx", Command:[]string{"/bin/sh", "-c", "echo 'Hello, World! 1'"}, Args:[]string(nil), WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil), Claims:[]core.ResourceClaim(nil)}, ResizePolicy:[]core.ContainerResizePolicy(nil), RestartPolicy:(*core.ContainerRestartPolicy)(nil), VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0x400ccecc50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0x4014d12cf0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil), SchedulingGates:[]core.PodSchedulingGate(nil), ResourceClaims:[]core.PodResourceClaim(nil)}}: field is immutable
Error: plugin "diff" exited with error
helm.go:86: 2025-05-19 11:25:53.904139 +0200 CEST m=+0.367769876 [debug] plugin "diff" exited with error
```
## Workaround

We currently work around the issue by using the `--no-hooks` flag, which excludes hooks from the diff. This flag has its own implications, but it works for our use case; an example invocation follows.
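For reference, this is the same command as above with `--no-hooks` added:

```
$ helm diff upgrade helm-diff-repro . --install --take-ownership --no-hooks --debug
```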
## Repro steps

To reproduce the issue I used a simple Helm chart containing only a single template file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-diff-repro
spec:
  selector:
    matchLabels:
      app: helm-diff-repro
  template:
    metadata:
      labels:
        app: helm-diff-repro
    spec:
      containers:
        - name: helm-diff-repro
          image: nginx
---
kind: Job
apiVersion: batch/v1
metadata:
  name: helm-diff-repro-hook
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
spec:
  template:
    spec:
      containers:
        - name: helm-diff-repro-hook
          image: nginx
          command: ["/bin/sh", "-c", "echo 'Hello, World!'"]
      restartPolicy: Never
```
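For completeness, the chart also needs a `Chart.yaml`; a minimal one consistent with the `# Source: test-chart/...` path in the diff output above (the version number is an illustrative placeholder):

```yaml
# Minimal Chart.yaml; the name matches the "# Source: test-chart/..."
# line in the diff output. version is an illustrative placeholder.
apiVersion: v2
name: test-chart
version: 0.1.0
```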
Then install this chart into a Kubernetes cluster, for example as shown below.
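An install command matching the release name and namespace used throughout this report (run from the chart directory):

```
$ helm install helm-diff-repro . --namespace default
```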
After that, Problem 1 is reproducible.
To reproduce Problem 2, make any change in the Helm hook manifest - I modified the hook Job's command, as in the snippet below.
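For example, changing the hook command to the value seen in the Problem 2 diff output:

```yaml
# In the hook Job's container spec (templates/deployment.yaml),
# change the command; this matches the "+" line in the diff above.
command: ["/bin/sh", "-c", "echo 'Hello, World! 1'"]
```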
## Tested versions

```
$ helm version
version.BuildInfo{Version:"v3.17.3", GitCommit:"e4da49785aa6e6ee2b86efd5dd9e43400318262b", GitTreeState:"clean", GoVersion:"go1.24.2"}
$ helm diff version
3.11.0
```