OCPBUGS-37982: Bug fix: Reduce Frequency of Update Requests for Copied CSVs #3497
Conversation
} else {
	// Even if they're the same, ensure the returned prototype is annotated.
	prototype.Annotations[statusCopyHashAnnotation] = status
	updated = prototype
}
Compared to the original PR, the main addition here (beyond tests) seems to be this else block.
I'm not entirely sure I fully understand: are we also looking to implement what's outlined in the "Proposed Fixes" section of that document? How and where are we addressing the concerns raised in its "Why don't we just merge the [fix PR](https://github.com/operator-framework/operator-lifecycle-manager/pull/3411) as-is?" section?
This is a first pass: basically, just merge the old PR. With this PR we're taking path #4 from the scoping doc: merge the PR with some possible problems, which should be a minor use case (users changing the copied CSVs).
The else block is not the only thing done here, though; the main addition is the tracking hashes, so we can tell which copied CSVs actually need an update.
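For illustration, here is a minimal sketch of the hash-tracking idea, under the assumption that only a hash mismatch should trigger an API write. The helper names and annotation string values are illustrative, not necessarily the exact identifiers used in this PR:

```go
package copiedcsv

import (
	"fmt"
	"hash/fnv"
)

// Annotation keys are illustrative; the PR defines its own constants.
const (
	nonStatusCopyHashAnnotation = "olm.operatorframework.io/nonStatusCopyHash"
	statusCopyHashAnnotation    = "olm.operatorframework.io/statusCopyHash"
)

// hashOf returns a short, stable hash of an arbitrary value.
func hashOf(obj interface{}) string {
	h := fnv.New64a()
	fmt.Fprintf(h, "%#v", obj)
	return fmt.Sprintf("%x", h.Sum64())
}

// needsUpdate compares the hashes recorded on an existing copied CSV against
// the hashes computed from the desired prototype; only a mismatch should
// trigger an API write.
func needsUpdate(existingAnnotations map[string]string, desiredNonStatusHash, desiredStatusHash string) (specChanged, statusChanged bool) {
	specChanged = existingAnnotations[nonStatusCopyHashAnnotation] != desiredNonStatusHash
	statusChanged = existingAnnotations[statusCopyHashAnnotation] != desiredStatusHash
	return specChanged, statusChanged
}
```

The trade-off is a little extra bookkeeping metadata on each copied CSV, in exchange for steady-state syncs that stay read-only against the cache.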
> this is first pass, basically, just merge the old PR.

We previously agreed that the old PR wasn't quite the right approach, correct? Given that, I'm not sure it makes sense to merge it as-is. If we need to cut a release before the proper solution is in place, we might ship a change we don't want, which doesn't seem ideal to me. That's a case where I would request changes, since it doesn't provide the desired solution or fix the problem as defined in the doc; note that the doc has a section on exactly this, "Why don't we just merge the PR as it is?".

It's fine to add it as you did, but what do you think about creating a commit on top with the solution we intend to use? Could we focus on implementing the correct fix for the bug instead? Is there any reason we need to merge the old PR without the correct fix?
c/c @tmshort
Please see @tmshort's comment on the doc. The idea is that merging this PR is a first step: it gives some relief, and then we'll make another pass after this settles. Settling involves seeing much less API activity, especially on clusters with many namespaces for the CSV to be copied to.
Settling also involves seeing whether the concerns mentioned in the doc, primarily OLM not correcting user-modified copied CSVs, turn out to be a real-world problem.
My understanding is we have two problems:
- spamming logs and api server requests
- changes to copied csvs won't be detected and therefore will linger
The proposed approach is to separate these two problems by resolving problem 1 (which has a material impact on the cluster and the customer's bottom line) and handling problem 2 later.

It's my understanding that changes to copied CSVs don't carry any behavioral changes in the system anyway; they only exist to make it possible to discover which operators are available in a particular namespace with a kubectl command. I'd also assume that, in most real-world cases, write access to CSVs will be restricted to the cluster admin and the namespace admin. If these two assumptions hold, the blast radius of modifying a copied CSV and not having it reconciled back to the intended spec should be pretty small.

So I tend to agree with the approach here: let's address the big problem of API server/log spamming, then worry about the relatively small problem of inconsistent copied CSVs.
I think there might be some simplification that can be done with the setting of the status/nonstatus annotations.
I'm assuming all the changes here are due to lint?
Yeah, and I just ran `make lint` locally to make sure nothing changed. Nothing changed.
@@ -803,6 +808,7 @@ func (a *Operator) copyToNamespace(prototype *v1alpha1.ClusterServiceVersion, ns

existing, err := a.copiedCSVLister.Namespace(nsTo).Get(prototype.GetName())
if apierrors.IsNotFound(err) {
	prototype.Annotations[nonStatusCopyHashAnnotation] = nonstatus
Because `copyToNamespace` is called in a loop, `prototype`, being a pointer, is reused multiple times, which means these annotations may already be set. Is there any reason why these annotations simply aren't set in `ensureCSVsInNamespaces()`, where the hashes are calculated?
good point possibly. checking...
Looking at it more closely, it seems like we shouldn't change it. Here's my reasoning:

Keeping the annotation logic here, in `copyToNamespace()`, encapsulates the update semantics so each call handles its own CSV's state reliably. We're reusing `prototype` and accounting for annotations that may already be set. If we move the logic to `ensureCSVsInNamespaces()`, we'd have to duplicate the annotation-checking logic, because handling those annotations is tightly coupled with the CSV's create/update lifecycle.

In `copyToNamespace()` we need to:
- Distinguish between a new creation (where the annotations don't exist yet) and an update (where the annotations might already be set but could be outdated).
- Apply the updates in a specific order (first the non-status hash, then the status hash, including a status update to avoid mismatches).
- Ensure that each target CSV reflects the current state expected for that specific namespace.

Aside from the hash handling, we'd still need to do the above work in `copyToNamespace()`. A condensed sketch of that ordering follows.
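Below is a condensed, hypothetical sketch of that ordering. `csvClient` stands in for the generated clientset, the annotation constants are the ones from the earlier sketch, and the real `copyToNamespace` differs in detail (resource version handling, error wrapping, etc.):

```go
package copiedcsv

import (
	"context"

	"github.com/operator-framework/api/pkg/operators/v1alpha1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// csvClient is a hypothetical stand-in for the generated clientset.
type csvClient interface {
	Get(ctx context.Context, namespace, name string) (*v1alpha1.ClusterServiceVersion, error)
	Create(ctx context.Context, csv *v1alpha1.ClusterServiceVersion) (*v1alpha1.ClusterServiceVersion, error)
	Update(ctx context.Context, csv *v1alpha1.ClusterServiceVersion) (*v1alpha1.ClusterServiceVersion, error)
	UpdateStatus(ctx context.Context, csv *v1alpha1.ClusterServiceVersion) (*v1alpha1.ClusterServiceVersion, error)
}

func copyToNamespaceSketch(ctx context.Context, c csvClient, prototype *v1alpha1.ClusterServiceVersion, nsTo, nonstatus, status string) error {
	proto := prototype.DeepCopy() // prototype is shared across namespaces
	proto.Namespace = nsTo
	if proto.Annotations == nil {
		proto.Annotations = map[string]string{}
	}

	existing, err := c.Get(ctx, nsTo, proto.GetName())
	switch {
	case apierrors.IsNotFound(err):
		// New copy: stamp the non-status hash, create, push status, then
		// stamp the status hash so the next sync sees a consistent pair.
		proto.Annotations[nonStatusCopyHashAnnotation] = nonstatus
		created, err := c.Create(ctx, proto)
		if err != nil {
			return err
		}
		created.Status = proto.Status
		if created, err = c.UpdateStatus(ctx, created); err != nil {
			return err
		}
		created.Annotations[statusCopyHashAnnotation] = status
		_, err = c.Update(ctx, created)
		return err
	case err != nil:
		return err
	}

	// Existing copy: only write when a recorded hash is stale.
	if existing.Annotations[nonStatusCopyHashAnnotation] != nonstatus {
		proto.UID = existing.UID
		proto.ResourceVersion = existing.ResourceVersion
		proto.Annotations[nonStatusCopyHashAnnotation] = nonstatus
		if _, err := c.Update(ctx, proto); err != nil {
			return err
		}
	}
	if existing.Annotations[statusCopyHashAnnotation] != status {
		// Push the status subresource, then record the new status hash
		// (and, per later commits, the observed generation/resourceVersion).
		copied := proto.DeepCopy()
		copied.Status = prototype.Status
		if copied, err = c.UpdateStatus(ctx, copied); err != nil {
			return err
		}
		copied.Annotations[statusCopyHashAnnotation] = status
		if _, err := c.Update(ctx, copied); err != nil {
			return err
		}
	}
	return nil
}
```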
Could we address the proper solution here, as defined in the doc, instead of merging the old PR fix? See: #3497 (comment)

I don't want to block anything, so I'm dismissing my request for changes regarding #3497 (review).
by adding annotations to copied CSVs that are populated with hashes of the non-status fields and the status fields. This seems to be how this was intended to work, but it was not actually working this way because the annotations never existed on the copied CSV. This resulted in a hot loop of update requests being made against all copied CSVs.

Signed-off-by: everettraven <[email protected]>
Signed-off-by: everettraven <[email protected]>
Signed-off-by: everettraven <[email protected]>
Signed-off-by: Brett Tofel <[email protected]>
Since we switched to a PartialObjectMetadata cache to save memory, we lost visibility into copied CSV spec and status fields, and the reintroduced nonStatusCopyHash/statusCopyHash annotations only partially solved the problem. Manual edits to a copied CSV could still go undetected, causing drift without reconciliation.

This commit adds two new annotations, olm.operatorframework.io/observedGeneration and olm.operatorframework.io/observedResourceVersion, and implements a mechanism to guard against metadata drift at the top of the existing-copy path in copyToNamespace. If a stored observedGeneration or observedResourceVersion no longer matches the live object, the operator now:

- Updates the spec and hash annotations
- Updates the status subresource
- Records the new generation and resourceVersion in the guard annotations

Because the guard only fires when its annotations are already present, all existing unit tests pass unchanged. We preserve the memory benefits of the metadata-only informer, avoid extra GETs, and eliminate unnecessary API churn. Future work may explore a WithTransform informer to regain full object visibility with minimal memory impact.

Signed-off-by: Brett Tofel <[email protected]>
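A hedged sketch of what that guard check might look like; the annotation keys come from the commit message above, while the helper name and shape are illustrative:

```go
package copiedcsv

import (
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Annotation keys taken from the commit message above.
const (
	observedGenerationAnnotation      = "olm.operatorframework.io/observedGeneration"
	observedResourceVersionAnnotation = "olm.operatorframework.io/observedResourceVersion"
)

// metadataDrifted reports whether the live copied CSV's metadata no longer
// matches what the operator recorded the last time it wrote the copy. The
// guard only fires when both annotations are already present, which is why
// existing unit tests pass unchanged.
func metadataDrifted(existing metav1.Object) bool {
	annotations := existing.GetAnnotations()
	gen, genOK := annotations[observedGenerationAnnotation]
	rv, rvOK := annotations[observedResourceVersionAnnotation]
	if !genOK || !rvOK {
		return false
	}
	return gen != strconv.FormatInt(existing.GetGeneration(), 10) ||
		rv != existing.GetResourceVersion()
}
```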
Verifies that exactly three updates (spec, status, guard) are issued when the observedGeneration doesn't match.

Signed-off-by: Brett Tofel <[email protected]>
In general, I think it LGTM. I'm curious why we need the annotations at all. Would it be sufficient to do something like: generate the desired copied CSV from the parent CSV and compare it against what currently exists?

I think the answer might be that the sync function is being driven off parent/non-copied CSV events. Even so, we should be able to use the general approach outlined above (minus the returns). wdyt?

I guess the only major difference is swapping memory (storing the hashes in the annotations) for computation (calculating both desired and current hashes). Maybe that's cleaner and adds fewer implementation concerns to the copied CSV?
I've not looked at the exact changes in the PR, but having been originally involved in the bug when it first rolled around, the reason I had determined we couldn't do this is that we only ever cached the copied CSV metadata, not the whole copied CSV.

IIRC the bug was originally concerned with the numerous update calls, which were happening because we always decided we needed to update the CSVs. At the time, it was also determined that we couldn't change what we cached, because edge-computing solutions like MicroShift needed the reduction in memory footprint to run OLM. Because of this, you would always have to make a GET call to the kube-apiserver for every copied CSV you need to evaluate. At worst, this could lead to making twice the API calls we were already making -- a GET followed by an UPDATE as opposed to always issuing an UPDATE. At best, this was exactly the same -- always making a GET request.

If the constraint of not being able to change how the data is cached is still in play, you'll end up with the same problem. The bug was originally filed as "OLM makes too many calls to the Kubernetes API server, inflating the audit logs and thus our log ingestion/monitoring bill". Switching from always issuing an UPDATE to always issuing a GET and possibly an UPDATE would make this worse than just not fixing the bug.
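For context, this is roughly what metadata-only caching looks like with client-go's metadata informer (a sketch with an assumed resync interval; OLM's actual wiring differs). The cache then only ever holds PartialObjectMetadata, so spec/status comparisons require either stored hashes or a live GET:

```go
package copiedcsv

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
)

// newCSVMetadataInformer builds an informer whose cache holds only
// PartialObjectMetadata for CSVs (labels, annotations, resourceVersion,
// generation), not full spec/status -- which is why a plain cache lookup
// cannot tell whether a copied CSV's spec or status has drifted.
func newCSVMetadataInformer(cfg *rest.Config) (metadatainformer.SharedInformerFactory, error) {
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	factory := metadatainformer.NewSharedInformerFactory(client, 30*time.Minute)
	gvr := schema.GroupVersionResource{
		Group:    "operators.coreos.com",
		Version:  "v1alpha1",
		Resource: "clusterserviceversions",
	}
	// Registering interest in the resource wires up a metadata-only informer.
	_ = factory.ForResource(gvr).Informer()
	return factory, nil
}
```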
Gotcha. I missed that we weren't doing a live GET call but rather pulling it from the cache. That all makes sense. Thank you =D
prototype.UID = existing.UID
// sync hash annotations
prototype.Annotations[nonStatusCopyHashAnnotation] = nonstatus
prototype.Annotations[statusCopyHashAnnotation] = status
should we update the status hash post status update, i.e. with the observed resource version and generation annotations (for the same reason described in line 897)?
Done in the soon-to-be-pushed commit 4b21aa6e.
It would be cool if we could add a regression test of some sort that measures the number of API server requests, the log size, or something else that helps us verify the effectiveness of the fix (and make sure we don't fall into the same trap again). Though only if there is a relatively easy way to do it. Otherwise, could we post some manual verification? i.e. pre-fix logs are big, post-fix logs are OK. Or maybe grabbing some API server stats or something...? wdyt?
@per replying to your comment:
Why annotations vs “just re-hash everything on the fly”
Description of the change:
Please check out this doc scoping out this change: https://docs.google.com/document/d/1P4cSYEP05vDyuhBfilyuWgL5d5OOD5z7JlAyOxpPqps
In this PR we are resurrecting #3411 with the intent to fix what that PR was originally going to fix. Follow-on work will address the subsequently revealed problem with `metadata.generation|resourceVersion`, as per the doc comment by @tmshort.

Motivation for the change:
[From the linked doc, "How Did We Get Here"]
Architectural changes:
Testing remarks:
Apart from the expected changes around the inability to track copied CSV changes made by a user, we should be careful to test the following: