
Conversation

MaciekPytel
Contributor

This allows customizing the balancing logic for different use-cases. In particular, this PR implements a GKE-specific version (only enabled if the provider is gke) that considers node groups with the same gke-nodepool as similar.
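For orientation, here is a minimal sketch of the shape this PR introduces: a balancing processor whose similarity check is injected as a comparator function, so a provider-specific build (e.g. GKE) can plug in its own logic. The names and signatures below are assumptions inferred from identifiers visible in this thread, not the merged code; the PR's comparator appears to work on scheduler NodeInfo objects (per IsNodeInfoSimilar), while plain Node objects are used here to keep the sketch self-contained.

```go
package nodegroupset

import (
	apiv1 "k8s.io/api/core/v1"
)

// nodeComparator decides whether two node groups (represented here by one
// sample node each) should be treated as similar and balanced together.
// Hypothetical simplification of the PR's comparator type.
type nodeComparator func(n1, n2 *apiv1.Node) bool

// balancingProcessorSketch mirrors the idea behind
// BalancingNodeGroupSetProcessor: the Comparator field makes the similarity
// logic pluggable instead of hard-coding it in the processor.
type balancingProcessorSketch struct {
	Comparator nodeComparator
}
```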

@k8s-ci-robot added the cncf-cla: yes and size/L labels on Oct 23, 2018
The goal is to allow customization of this logic
for different use-cases and cloud providers.
Also refactor the Balancing processor a bit to make it easily extensible.
@MaciekPytel force-pushed the gke_nodegroup_balancing branch from 43dea94 to 01a56a8 on October 25, 2018 16:51
@k8s-ci-robot added the size/XL label and removed the size/L label on Oct 25, 2018
@MaciekPytel
Contributor Author

@losipiuk Refactored based on our conversation, PTAL.

Contributor

@losipiuk left a comment


Nits and questions.

}

// FindSimilarNodeGroups returns a list of NodeGroups similar to the one provided in parameter.
func (b *BalancingNodeGroupSetProcessor) FindSimilarNodeGroups(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup,
Contributor


nit: if you are breaking the argument list, put each arg on a separate line
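For illustration, the layout being asked for is one parameter per line; the function below is hypothetical and used only to show the formatting, since the full FindSimilarNodeGroups signature is not quoted here.

```go
package main

import "fmt"

// describeGroup is a hypothetical helper used only to illustrate the
// "one argument per line" layout requested in the review comment.
func describeGroup(
	groupName string,
	minSize int,
	maxSize int,
) string {
	return fmt.Sprintf("%s [%d..%d]", groupName, minSize, maxSize)
}

func main() {
	fmt.Println(describeGroup("example-node-group", 1, 10))
}
```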


// BalanceScaleUpBetweenGroups splits a scale-up between provided NodeGroups.
func (b *BalancingNodeGroupSetProcessor) BalanceScaleUpBetweenGroups(context *context.AutoscalingContext, groups []cloudprovider.NodeGroup, newNodes int) ([]ScaleUpInfo, errors.AutoscalerError) {
return BalanceScaleUpBetweenGroups(groups, newNodes)
Contributor


Move BalanceScaleUpBetweenGroups from scale_up.go to this file

continue
}
comparator := b.Comparator
if comparator == nil {
Contributor


I would rather not have this logic here and instead treat Comparator as obligatory parametrization.
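A rough sketch of what treating the comparator as obligatory parametrization could look like: require it at construction time, so the nil check and fallback in the processing loop disappear. All names below are hypothetical; this is not the PR's code.

```go
package main

import (
	"errors"
	"fmt"
)

// comparatorSketch stands in for the comparator type; a hypothetical
// simplification operating on plain strings.
type comparatorSketch func(a, b string) bool

// processorSketch holds a comparator that is guaranteed to be non-nil,
// because construction fails without one.
type processorSketch struct {
	comparator comparatorSketch
}

// newProcessorSketch makes the comparator an obligatory parameter instead of
// an optional field defaulted at call time.
func newProcessorSketch(c comparatorSketch) (*processorSketch, error) {
	if c == nil {
		return nil, errors.New("comparator must not be nil")
	}
	return &processorSketch{comparator: c}, nil
}

func main() {
	p, err := newProcessorSketch(func(a, b string) bool { return a == b })
	if err != nil {
		panic(err)
	}
	fmt.Println(p.comparator("pool-1", "pool-1")) // true
}
```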

glog.Warningf("Failed to find nodeInfo for group %v", ngId)
continue
}
comparator := b.Comparator
Contributor


why copy it to a variable?

Contributor Author


I think it's a standard coding pattern for applying defaults :p
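Spelled out, the pattern being referenced copies the optional field into a local and substitutes a default when it is nil, so the rest of the function always calls through a valid value. A generic sketch, not the PR's code:

```go
package main

import "fmt"

// labeler carries an optional formatting function.
type labeler struct {
	format func(name string) string
}

// label applies the "copy to a local, default if nil" pattern discussed above.
func (l *labeler) label(name string) string {
	format := l.format
	if format == nil {
		// Caller did not parametrize; fall back to a default.
		format = func(n string) string { return "node-group/" + n }
	}
	return format(name)
}

func main() {
	fmt.Println((&labeler{}).label("pool-1")) // node-group/pool-1
}
```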

n1 := BuildTestNode("node1", 1000, 2000)
n2 := BuildTestNode("node2", 1000, 2000)
checkNodesSimilar(t, n1, n2, true)
checkNodesSimilar(t, n1, n2, IsNodeInfoSimilar, true)
Contributor


what is the reason for passing the same comparator in every test?
Drop the argument?

Contributor


Ok I see the other test.

if nodesFromSameGkeNodePool(n1, n2) {
return true
}
return IsNodeInfoSimilar(n1, n2)
Contributor


Or maybe return false? WDYT? Too much of a change in semantics?

Contributor Author


Good question. I don't think it matters much, because it's very hard to have similar MIGs in different node pools. That being said, I see a potential use-case (discussed offline) and I don't see any negative effects of balancing such MIGs (assuming they even exist), so let's leave it as is.
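For reference, the semantics settled on above can be sketched as: node groups in the same GKE node pool are always similar, and everything else falls through to the generic check rather than returning false. The label key and helper names below are assumptions for illustration; the PR's nodesFromSameGkeNodePool and IsNodeInfoSimilar may be implemented differently.

```go
package main

import "fmt"

// gkeNodePoolLabel is the well-known GKE node pool label; treated here as an
// assumption rather than something quoted from the PR.
const gkeNodePoolLabel = "cloud.google.com/gke-nodepool"

// genericSimilar stands in for the generic comparator (IsNodeInfoSimilar);
// it is stubbed out so the sketch stays self-contained.
func genericSimilar(l1, l2 map[string]string) bool {
	return false
}

// sameGkeNodePool mirrors the idea of nodesFromSameGkeNodePool: both nodes
// carry the node pool label and the values match.
func sameGkeNodePool(l1, l2 map[string]string) bool {
	p1, ok1 := l1[gkeNodePoolLabel]
	p2, ok2 := l2[gkeNodePoolLabel]
	return ok1 && ok2 && p1 == p2
}

// gkeSimilar keeps the semantics discussed above: same node pool wins,
// otherwise defer to the generic comparator instead of returning false.
func gkeSimilar(l1, l2 map[string]string) bool {
	if sameGkeNodePool(l1, l2) {
		return true
	}
	return genericSimilar(l1, l2)
}

func main() {
	a := map[string]string{gkeNodePoolLabel: "pool-1"}
	b := map[string]string{gkeNodePoolLabel: "pool-1"}
	fmt.Println(gkeSimilar(a, b)) // true: same node pool
}
```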

@losipiuk
Contributor

/lgtm
/approve
/hold

@k8s-ci-robot added the lgtm and do-not-merge/hold labels on Oct 25, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: losipiuk

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Oct 25, 2018
@losipiuk
Contributor

@MaciekPytel unhold if you don't intend to make any changes. Everything is optional from my side.

@MaciekPytel
Contributor Author

/hold cancel

@k8s-ci-robot removed the do-not-merge/hold label on Oct 26, 2018
@k8s-ci-robot merged commit f341d8a into kubernetes:master on Oct 26, 2018