Move nodegroup balancing to processor, add GKE-specific implementation #1341
Conversation
The goal is to allow customization of this logic for different use cases and cloud providers.
Also refactor the Balancing processor a bit to make it easily extensible.
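For context, here is a rough sketch of the processor shape this change introduces, pieced together from the diff hunks below; the nodeInfosForGroups parameter and the exact types are assumptions on my part, not confirmed by the PR:

type NodeGroupSetProcessor interface {
	FindSimilarNodeGroups(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup,
		nodeInfosForGroups map[string]*schedulercache.NodeInfo) ([]cloudprovider.NodeGroup, errors.AutoscalerError)
	BalanceScaleUpBetweenGroups(context *context.AutoscalingContext, groups []cloudprovider.NodeGroup,
		newNodes int) ([]ScaleUpInfo, errors.AutoscalerError)
}

// BalancingNodeGroupSetProcessor is the default implementation; the injected
// Comparator is the extension point that lets a cloud provider (e.g. GKE)
// plug in its own similarity check.
type BalancingNodeGroupSetProcessor struct {
	Comparator NodeInfoComparator
}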
Force-pushed from 43dea94 to 01a56a8.
@losipiuk Refactored based on our conversation, PTAL.
Nits and questions.
}

// FindSimilarNodeGroups returns a list of NodeGroups similar to the one provided as a parameter.
func (b *BalancingNodeGroupSetProcessor) FindSimilarNodeGroups(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup,
nit: if you are breaking the argument list, put each argument on a separate line
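For illustration, the style being asked for (the third parameter is assumed from the surrounding diff):

func (b *BalancingNodeGroupSetProcessor) FindSimilarNodeGroups(
	context *context.AutoscalingContext,
	nodeGroup cloudprovider.NodeGroup,
	nodeInfosForGroups map[string]*schedulercache.NodeInfo,
) ([]cloudprovider.NodeGroup, errors.AutoscalerError) {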
// BalanceScaleUpBetweenGroups splits a scale-up between provided NodeGroups.
func (b *BalancingNodeGroupSetProcessor) BalanceScaleUpBetweenGroups(context *context.AutoscalingContext, groups []cloudprovider.NodeGroup, newNodes int) ([]ScaleUpInfo, errors.AutoscalerError) {
	return BalanceScaleUpBetweenGroups(groups, newNodes)
Move BalanceScaleUpBetweenGroups from scale_up.go to this file.
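For readers following along, the balancing itself boils down to evening out group sizes. A minimal sketch of the idea (splitEvenly is a hypothetical name; the real function being moved also respects each group's maximum size and returns ScaleUpInfo structs):

func splitEvenly(groupSizes []int, newNodes int) []int {
	increases := make([]int, len(groupSizes))
	for i := 0; i < newNodes; i++ {
		// Give each new node to the currently smallest group, so the
		// resulting target sizes end up as even as possible.
		smallest := 0
		for j := range groupSizes {
			if groupSizes[j] < groupSizes[smallest] {
				smallest = j
			}
		}
		groupSizes[smallest]++
		increases[smallest]++
	}
	return increases
}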
	continue
}
comparator := b.Comparator
if comparator == nil {
I would rather not have this logic here and treat Comparator as obligatory parametrization.
glog.Warningf("Failed to find nodeInfo for group %v", ngId)
	continue
}
comparator := b.Comparator
Why copy to a variable?
I think it's a standard coding pattern for applying defaults :p
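For reference, the pattern under discussion, as it appears in the diff above: copy the configured field into a local, then substitute a default when it was left unset. The fallback to IsNodeInfoSimilar is my assumption based on the rest of the diff:

comparator := b.Comparator
if comparator == nil {
	// Fall back to the generic similarity check when no provider-specific
	// comparator was injected (assumed default).
	comparator = IsNodeInfoSimilar
}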
n1 := BuildTestNode("node1", 1000, 2000)
n2 := BuildTestNode("node2", 1000, 2000)
- checkNodesSimilar(t, n1, n2, true)
+ checkNodesSimilar(t, n1, n2, IsNodeInfoSimilar, true)
What is the reason for passing the same comparator for every test? Drop the argument?
Ok I see the other test.
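A hypothetical reconstruction of the reworked helper, inferred from the call site above (the names and the exact NodeInfo plumbing are assumptions):

type NodeInfoComparator func(n1, n2 *schedulercache.NodeInfo) bool

func checkNodesSimilar(t *testing.T, n1, n2 *apiv1.Node, comparator NodeInfoComparator, shouldBeSimilar bool) {
	// Wrap the bare nodes in NodeInfos so any comparator can consume them.
	ni1 := schedulercache.NewNodeInfo()
	ni1.SetNode(n1)
	ni2 := schedulercache.NewNodeInfo()
	ni2.SetNode(n2)
	assert.Equal(t, shouldBeSimilar, comparator(ni1, ni2))
}

Passing the comparator explicitly is what lets the same helper drive both the generic and the GKE-specific tests.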
if nodesFromSameGkeNodePool(n1, n2) {
	return true
}
return IsNodeInfoSimilar(n1, n2)
Or maybe return false? WDYT? Too much of a change of semantics?
Good question. I don't think it matters much, because it's very hard to have similar MIGs in different node pools. That being said, I see some potential use cases (discussed offline) and I don't see any negative effects of balancing such MIGs (assuming they even exist), so let's leave it as is.
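For completeness, a sketch of how the GKE check can work; the label key below is the standard GKE node-pool label, but treat the helper's exact shape as an assumption rather than the PR's implementation:

const gkeNodepoolLabel = "cloud.google.com/gke-nodepool"

func nodesFromSameGkeNodePool(n1, n2 *schedulercache.NodeInfo) bool {
	// Two nodes belong to the same node pool iff they carry the same
	// non-empty gke-nodepool label value.
	pool1 := n1.Node().Labels[gkeNodepoolLabel]
	pool2 := n2.Node().Labels[gkeNodepoolLabel]
	return pool1 != "" && pool1 == pool2
}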
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: losipiuk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
@MaciekPytel unhold if you are not willing to do any changes. Everything is optional from my side.
/hold cancel
This allows customizing the balancing logic for different use cases. In particular, this PR implements a GKE-specific version (only enabled if the provider is gke) that considers node groups with the same gke-nodepool label as similar.