
[db] Add applicationCluster field to d_b_workspace_cluster table #13722


Merged
merged 4 commits on Oct 17, 2022

Conversation

andrew-farries
Contributor

@andrew-farries andrew-farries commented Oct 10, 2022

Description

As part of #9198 we want to start syncing the d_b_workspace_cluster table with db-sync.

Currently, the table differs between the US and EU regions because each region's table contains only the data relevant to that region. For example, in the EU table, the eu70 workspace cluster is marked as available and the us70 cluster is cordoned; in the US table, eu70 is cordoned and us70 is available.

In order to sync the table we need to get to a point where there is no difference in the data in the table between EU and US regions.

To do that we will introduce a new field in the table called applicationCluster which records the name of the application cluster to which the record belongs. Thus, for each workspace cluster there will be two rows in Gitpod SaaS, one per application cluster:

| name | applicationCluster | url | tls | state | ... |
| ---- | ------------------ | --- | --- | ----- | --- |
| eu70 | eu02 | url | tls info | available | ... |
| eu70 | us02 | url | tls info | cordoned | ... |

Effectively the new applicationCluster column gives the table an extra dimension so that we can combine both tables (EU and US) into one.

As a first step towards this, this PR adds the column to the table and makes gpctl fill in the value when registering a new workspace cluster. The value is taken from the GITPOD_INSTALLATION_SHORTNAME environment variable in ws-manager-bridge.
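The registration flow described above can be sketched as follows. This is an illustrative sketch only: the function name, row shape, and error handling are hypothetical, not the actual ws-manager-bridge code.

```typescript
// Hypothetical sketch: deriving the new applicationCluster value at
// registration time from the bridge's environment.
interface NewWorkspaceClusterRow {
  name: string;
  url: string;
  applicationCluster: string;
}

function buildRegistrationRow(
  name: string,
  url: string,
  env: Record<string, string | undefined>
): NewWorkspaceClusterRow {
  // The value comes from the application cluster's own environment,
  // not from the workspace cluster being registered.
  const applicationCluster = env["GITPOD_INSTALLATION_SHORTNAME"];
  if (!applicationCluster) {
    throw new Error(
      "GITPOD_INSTALLATION_SHORTNAME must be set in the ws-manager-bridge environment"
    );
  }
  return { name, url, applicationCluster };
}
```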

⚠️ Needs https://github.com/gitpod-io/ops/pull/5758 merged first ⚠️

Related Issue(s)

Part of #9198

How to test

I'd like to be able to register a workspace cluster from a preview environment to check that the value of applicationCluster is correctly populated.

Release Notes

NONE

Documentation

Werft options:

  • /werft with-local-preview
    If enabled this will build install/preview
  • /werft with-preview
  • /werft with-integration-tests=all
    Valid options are all, workspace, webapp, ide

@easyCZ
Member

easyCZ commented Oct 11, 2022

@andrew-farries Would like to better understand the resulting semantics of this. It feels wrong to have the resulting rows in the table with the source cluster

@geropl
Member

geropl commented Oct 11, 2022

@andrew-farries Would like to better understand the resulting semantics of this. It feels wrong to have the resulting rows in the table with the source cluster

@easyCZ You can imagine this as a directed graph, where we want this table to contain all edges from application clusters to workspace clusters. Each edge carries the parameters that configure how the respective application cluster interacts with the workspace cluster.

Does that make more sense?
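The edge analogy can be sketched like this, using the cluster names from the PR description; the types and helper are illustrative, not actual Gitpod code.

```typescript
type ClusterState = "available" | "cordoned";

// Sketch of the directed-graph view: each row in d_b_workspace_cluster
// is an edge from an application cluster to a workspace cluster, and the
// edge carries the per-pairing configuration (state, etc.).
interface ClusterEdge {
  applicationCluster: string; // edge source
  workspaceCluster: string;   // edge target
  state: ClusterState;        // per-edge configuration
}

const edges: ClusterEdge[] = [
  { applicationCluster: "eu02", workspaceCluster: "eu70", state: "available" },
  { applicationCluster: "eu02", workspaceCluster: "us70", state: "cordoned" },
  { applicationCluster: "us02", workspaceCluster: "eu70", state: "cordoned" },
  { applicationCluster: "us02", workspaceCluster: "us70", state: "available" },
];

// The same workspace cluster appears once per application cluster,
// possibly with a different state on each edge.
function statesOf(workspaceCluster: string): ClusterState[] {
  return edges
    .filter((e) => e.workspaceCluster === workspaceCluster)
    .map((e) => e.state);
}
```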

@werft-gitpod-dev-com

started the job as gitpod-build-af-add-column-to-workspace-cluster-table.3 because the annotations in the pull request description changed
(with .werft/ from main)

@werft-gitpod-dev-com

started the job as gitpod-build-af-add-column-to-workspace-cluster-table.4 because the annotations in the pull request description changed
(with .werft/ from main)

@geropl
Member

geropl commented Oct 12, 2022

@andrew-farries It would be nice if gpctl would display that field as well. But this can easily be a follow-up PR as long as the DB field doesn't carry meaning.

@geropl
Member

geropl commented Oct 12, 2022

When reviewing I noticed a mismatch between bridge.cfg.installation/GITPOD_INSTALLATION_SHORTNAME ("dev") and the default local bridge name we create (""):

This:

  1. does not block this change (only becomes relevant once we start to actively select by applicationCluster)
  2. might be either caused by:
    1. us misconfiguring preview environments
    2. a bug in the installer that affects Self-Hosted as well (⚠️)

@andrew-farries We should definitely investigate before the next Self-Hosted release. 💯

@werft-gitpod-dev-com

started the job as gitpod-build-af-add-column-to-workspace-cluster-table.5 because the annotations in the pull request description changed
(with .werft/ from main)

Member

@geropl geropl left a comment


Thank you @andrew-farries , the change LGTM!

/hold mainly for this nit, please decide whether you want to address or not

@andrew-farries andrew-farries force-pushed the af/add-column-to-workspace-cluster-table branch from 551b555 to b668b3e Compare October 12, 2022 09:15
@andrew-farries
Contributor Author

@andrew-farries It would be nice if gpctl would display that field as well. But this can easily be a follow-up PR as long as the DB field doesn't carry meaning.

Good point - will address this in a follow up PR.

@andrew-farries andrew-farries force-pushed the af/add-column-to-workspace-cluster-table branch from b668b3e to 5025e12 Compare October 12, 2022 13:52
@andrew-farries
Contributor Author

andrew-farries commented Oct 12, 2022

@andrew-farries We should definitely investigate before the next Self-Hosted release. 💯

This is interesting. Looking at the installer code, they are both set to InstallationShortName so I'm not sure why one is empty.

`Installation: ctx.Config.Metadata.InstallationShortname,`

and

`Name: ctx.Config.Metadata.InstallationShortname,`

@sagor999
Contributor

What will happen to existing clusters after migration? I assume they will have an empty field for application cluster? Will this break anything?

Also, how does traffic routing happen with regard to this table? Does it have any effect on it? For example, in the US, EU clusters are cordoned so that no traffic from the US gets sent to the EU (at least that is how I understand it). I want to make sure this change will not break anything anywhere else.

Contributor

@sagor999 sagor999 left a comment


Approving since I am only an owner of one file, so do not want to block this PR.
Would love to receive response to the questions above as a sanity check.

@geropl
Copy link
Member

geropl commented Oct 13, 2022

What will happen to existing clusters after migration? I assume they will have an empty field for application cluster? Will this break anything?

@sagor999 In short: nothing, as a first step. As mentioned in the description, this PR is only about adding said field, without reading it anywhere.

Also, how does traffic routing happen with regard to this table? Does it have any effect on it? For example, in the US, EU clusters are cordoned so that no traffic from the US gets sent to the EU (at least that is how I understand it). I want to make sure this change will not break anything anywhere else.

The next step is to check that it's filled correctly (with eu-02 and us-02, respectively). Once that's verified, we can start reading it, so that app cluster eu-02 only reads rows with applicationCluster='eu-02'. This way, once we (later) start syncing that table, the code will still behave just as it does now. 👍 But instead of implicitly relying on having a separate DB table per app cluster, we make the connection explicit with the applicationCluster column, and everything can happily be stored in a single table. 🧘
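The eventual read path described above can be sketched as follows. This is a hedged sketch, not the actual webapp code: the row shape and helper are illustrative, and the filter simply mirrors a `WHERE applicationCluster = ?` clause on the merged table.

```typescript
// Illustrative row shape for the merged d_b_workspace_cluster table.
interface WorkspaceClusterRow {
  name: string;
  applicationCluster: string;
  state: string;
}

// Each application cluster sees only the rows stamped with its own
// name, preserving today's per-region behaviour.
function visibleTo(
  rows: WorkspaceClusterRow[],
  applicationCluster: string
): WorkspaceClusterRow[] {
  return rows.filter((r) => r.applicationCluster === applicationCluster);
}

const merged: WorkspaceClusterRow[] = [
  { name: "eu70", applicationCluster: "eu-02", state: "available" },
  { name: "eu70", applicationCluster: "us-02", state: "cordoned" },
];

// euView contains exactly the rows today's separate EU table holds.
const euView = visibleTo(merged, "eu-02");
```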

@geropl
Member

geropl commented Oct 13, 2022

This is interesting. Looking at the installer code, they are both set to InstallationShortName so I'm not sure why one is empty.

@andrew-farries Maybe our preview env config code is broken/bad? 🤔

@andrew-farries
Contributor Author

andrew-farries commented Oct 17, 2022

This is interesting. Looking at the installer code, they are both set to InstallationShortName so I'm not sure why one is empty.

@andrew-farries Maybe our preview env config code is broken/bad? 🤔

The reason for this mismatch will be addressed by 9773389.

@andrew-farries andrew-farries force-pushed the af/add-column-to-workspace-cluster-table branch 2 times, most recently from 8512929 to 66edaf1 Compare October 17, 2022 14:04
Andrew Farries added 4 commits October 17, 2022 14:23
This field will record to which application cluster the information in each row belongs.
Set the application cluster when registering a new workspace cluster.

The applicationCluster is set to the value of the
GITPOD_INSTALLATION_SHORTNAME env var which must be set in the
environment of ws manager bridge.
@andrew-farries andrew-farries force-pushed the af/add-column-to-workspace-cluster-table branch from 66edaf1 to 80f626a Compare October 17, 2022 14:24
@aledbf
Member

aledbf commented Oct 17, 2022

We don't have the shortName value in workspace clusters (yet). That is being added here

Just to confirm, will this not break anything if we don't have such value now?

@andrew-farries
Contributor Author

We only need an installation name to be present in application clusters (specifically, accessible to ws-manager-bridge via the GITPOD_INSTALLATION_SHORTNAME environment variable).

I don't think the shortname of the workspace cluster is relevant to this PR.

Labels
deployed: webapp Meta team change is running in production deployed Change is completely running in production release-note-none size/M team: webapp Issue belongs to the WebApp team