Rebase to v1.5.0 #443
Conversation
…roller-integtest Add integration tests for AWSMachine controller
Fix lint errors due to golangci-lint bump
Bumps [k8s.io/klog/v2](https://github.com/kubernetes/klog) from 2.50.0 to 2.60.1.
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/main/RELEASE.md)
- [Commits](kubernetes/klog@v2.50.0...v2.60.1)

updated-dependencies:
- dependency-name: k8s.io/klog/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor

Signed-off-by: dependabot[bot] <[email protected]>
…ot/go_modules/k8s.io/klog/v2-2.60.1 build(deps): bump k8s.io/klog/v2 from 2.50.0 to 2.60.1
Bumps [k8s.io/klog/v2](https://github.com/kubernetes/klog) from 2.40.1 to 2.60.1.
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/main/RELEASE.md)
- [Commits](kubernetes/klog@v2.40.1...v2.60.1)

updated-dependencies:
- dependency-name: k8s.io/klog/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor

Signed-off-by: dependabot[bot] <[email protected]>
…ot/go_modules/hack/tools/k8s.io/klog/v2-2.60.1 build(deps): bump k8s.io/klog/v2 from 2.40.1 to 2.60.1 in /hack/tools
Signed-off-by: Meghana Jangi <[email protected]> Co-authored-by: sedefsavas <[email protected]>
…ot/go_modules/github.com/go-logr/logr-1.2.3 build(deps): bump github.com/go-logr/logr from 1.2.2 to 1.2.3
This change adds support for specifying the configuration for nodegroup updates, specifically allowing you to tailor how many nodes (as a specific number or percentage) can be unavailable when updating the node group. Signed-off-by: Richard Case <[email protected]>
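As a rough illustration of the kind of configuration described above, a machine pool manifest fragment could look like the following. The API version, kind, and field names here are assumptions for illustration, not taken from this PR:

```shell
# Hypothetical manifest fragment showing what a nodegroup update
# configuration could look like; all names are illustrative assumptions.
cat <<'EOF'
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSManagedMachinePool
metadata:
  name: example-pool
spec:
  updateConfig:
    maxUnavailable: 1              # at most 1 node offline during an update
    # maxUnavailablePercentage: 25 # or, alternatively, a percentage
EOF
```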
typo CR feedback
…ranch fix typo
…tomize-install-instructions-branch update kustomize install instructions
…release-notes-generation Change the release process to use GitHub Release Notes
feat: add nodegroup update config support
…docs update multi-tenancy docs
netlify: fix missing go.sum entry for blang/semver
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.6 to 0.5.7.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](google/go-cmp@v0.5.6...v0.5.7)

updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-patch

Signed-off-by: dependabot[bot] <[email protected]>
…ot/go_modules/github.com/google/go-cmp-0.5.7 build(deps): bump github.com/google/go-cmp from 0.5.6 to 0.5.7
Bumps [sigs.k8s.io/kustomize/kustomize/v4](https://github.com/kubernetes-sigs/kustomize) from 4.5.2 to 4.5.3.
- [Release notes](https://github.com/kubernetes-sigs/kustomize/releases)
- [Commits](kubernetes-sigs/kustomize@kustomize/v4.5.2...kustomize/v4.5.3)

updated-dependencies:
- dependency-name: sigs.k8s.io/kustomize/kustomize/v4
  dependency-type: direct:production
  update-type: version-update:semver-patch

Signed-off-by: dependabot[bot] <[email protected]>
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.18.1 to 1.19.0.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](onsi/gomega@v1.18.1...v1.19.0)

updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-type: direct:production
  update-type: version-update:semver-minor

Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: Prajyot-Parab <[email protected]>
…ot/go_modules/github.com/onsi/gomega-1.19.0 build(deps): bump github.com/onsi/gomega from 1.18.1 to 1.19.0
Bump to golangci-lint v1.45.2
Fix flaky integration test
This introduces a new garbage collection service which will be used later by the controllers to clean up AWS resources for child/tenant clusters that were created via the CCM. Initially we support cleaning up load balancers (classic ELB, NLB, ALB) and security groups.

It works in 2 phases depending on whether a child cluster is being created/updated (i.e. **Reconcile**) or deleted (i.e. **ReconcileDelete**).

When a cluster is created, the cluster controllers will call **Reconcile**. Its purpose is to determine if the cluster should be garbage collected, and if it should, then it needs to be marked. If the gc feature is enabled we will operate an opt-out model, so all clusters will be garbage collected unless they explicitly opt out. To opt out, the infra cluster must have the `aws.cluster.x-k8s.io/external-resource-gc` annotation with a value of false. Otherwise a cluster is marked as requiring gc by adding the `awsexternalresourcegc.infrastructure.cluster.x-k8s.io` finalizer to the infra cluster.

When a cluster is deleted, the cluster controllers will call **ReconcileDelete**. The first job is to identify the AWS resources that were created by the CCM in the child/tenant cluster. This is done by using the AWS resource tagging API and getting resources with the kubernetes cluster owned label. The resources that are returned are then put into buckets for each AWS resource type (i.e. ec2, elasticloadbalancing), and these buckets of resource ARNs are passed to a function for that specific AWS service, which makes the actual API calls to clean up the AWS resources. The reason we use buckets is that the order in which you delete services can matter; for example, load balancers must be deleted before target groups.

The service will be used by a later change to the controllers.

Signed-off-by: Richard Case <[email protected]>
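For illustration, opting an infra cluster out of garbage collection could be done by setting the annotation described above; the resource kind and cluster name below are illustrative, only the annotation key comes from the message above:

```shell
# Opt an infra cluster out of garbage collection (illustrative names;
# requires a live cluster, so this is a usage sketch, not a test):
kubectl annotate awscluster my-cluster \
  aws.cluster.x-k8s.io/external-resource-gc=false
```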
Changed the gc service based on the latest proposal update & review feedback. It now uses a model where "the user decides whether to enable the gc at any time before they delete the cluster." This translates into the `ReconcileDelete` now:
- Checks to see if the cluster should be garbage collected by looking at the gc annotation (if it exists).
- Does resource cleanup

Signed-off-by: Richard Case <[email protected]>
The cleanup functions now accept the full list of AWS resources and filter which resources they want to delete themselves. Signed-off-by: Richard Case <[email protected]>
…o-capi-1.1.5 [release-1.5] Bump cluster-api to v1.1.5
…service_1-5 [release-1.5] feat: external load balancer garbage collection (part 2) - new gc service
This change uses the new garbage collection service and enables it during the reconciliation of `AWSCluster` and `AWSManagedControlPlane`. It's enabled via a new feature flag, `ExternalResourceGC`, which is disabled by default. If the feature flag is enabled then the gc service is called in `reconcileDelete` for the infra clusters; the gc service does the actual work of cleanup. New commands have been added to `clusterawsadm` to allow users to opt an already existing cluster in or out of garbage collection. Additionally, with the new mocks folder introduced with the gc service, the existing mocks have been deleted and tests/controllers updated. Signed-off-by: Richard Case <[email protected]>
Various changes as a result of review feedback. Signed-off-by: Richard Case <[email protected]>
…ot/cherry-pick-3633-to-release-1.5 [release-1.5] feat: external load balancer garbage collection (part 3) - add gc to reconciliation
This change introduces new e2e tests for the new garbage collection feature. There are tests for both managed and unmanaged clusters. These are separate test suites whilst the GC feature is experimental, as we need to enable this feature just for our tests without affecting the existing e2e tests. When this feature moves out of experimental we will merge these tests into the existing managed & unmanaged suites. Signed-off-by: Richard Case <[email protected]>
…ot/cherry-pick-3648-to-release-1.5 [release-1.5] feat: external load balancer garbage collection (part 4) - e2e tests
/retest

No techpreview E2E jobs running? Would be good to get those to make sure this doesn't break anything.

/approve

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Fedosin. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.

/lgtm

@alexander-demichev: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This commit rebases the cluster-api-provider-aws openshift patches on top of the kubernetes-sigs/cluster-api-provider-aws main branch.
There are several commits that we carry on top of the upstream cluster-api-provider-aws, and the rebase process allows us to preserve those. Here is a description of the process I used to create this PR.
(replicated @elmiko's process within openshift/kubernetes-autoscaler)
Process
First we need to identify the carry commits that we currently have. This is done against our previous rebase (or the fork point, in this case) to catch new changes. Once identified, we drop commits which have merged upstream and carry only the unique commits (see below for the carried and dropped commits).
Identify carry commits:
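One way to list them is with a `git log` range against the upstream tag. The sketch below simulates a small upstream repo and a fork with one carry commit so it is self-contained; all repo names and commit subjects are illustrative, not the real repositories:

```shell
# Sketch: simulate an upstream repo tagged v1.5.0 and a fork carrying one
# downstream patch, then list the carry commits. Names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config user.email dev@example.com && git config user.name dev
echo pkg > main.go && git add . && git commit -qm "upstream: initial commit"
git tag v1.5.0
cd .. && git clone -q upstream fork && cd fork
git config user.email dev@example.com && git config user.name dev
echo ci > ci.yaml && git add . && git commit -qm "UPSTREAM: <carry>: add CI config"
# Carry commits = commits reachable from our branch but not the upstream tag:
git log --oneline --no-merges v1.5.0..HEAD
```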
After identifying the carry commits, the next step is to create the new commit-tree that will be used for the rebase and then cherry pick the carry commits into the new branch. The following commands cover these steps:
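The exact command listing did not survive here; a minimal self-contained sketch of the `git commit-tree` approach follows. It simulates one repo with an upstream tag and a fork branch, builds a merge commit whose tree exactly matches the upstream tag, and re-applies a carry commit; all names and commit subjects are illustrative:

```shell
# Sketch: create a branch whose tree matches the upstream v1.5.0 tag while
# keeping the fork's history as a parent, then cherry-pick a carry commit.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
echo base > main.go && git add . && git commit -qm "upstream: base"
git branch fork                                   # fork diverges here
echo feature >> main.go && git commit -aqm "upstream: new feature"
git tag v1.5.0
git checkout -q fork
echo ci > ci.yaml && git add . && git commit -qm "UPSTREAM: <carry>: add CI config"
carry_sha=$(git rev-parse HEAD)
# Merge commit whose tree is exactly the upstream v1.5.0 tree, with both
# the fork head and the upstream tag as parents:
merge=$(git commit-tree 'v1.5.0^{tree}' -p HEAD -p v1.5.0 -m "Merge tag v1.5.0")
git checkout -qb rebase-upstream-latest "$merge"
# The tree now matches upstream exactly; re-apply the carry commits on top:
git cherry-pick -x "$carry_sha"
git log --oneline | head -2
```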
With the rebase-upstream-latest branch in place, I cherry picked the carry commits which we should carry.
Carried Commits
These commits are integral to our CI platform, or are specific to the releases we create for OpenShift.
Changed Commits