tapdb: retroactively insert old asset burns into burn table #1612
Not yet, as we haven't hooked up inserting elements into the tree yet. We also haven't implemented retroactive creation of supply updates in the tree.
LGTM 🩱
burn_id INTEGER PRIMARY KEY,

-- A reference to the primary key of the transfer that includes this burn.
transfer_id BIGINT NOT NULL REFERENCES asset_transfers(id),
So we only need to make this BIGINT for foreign key refs? Do we need to do this anywhere else? I think in some of my recent PRs, I don't declare foreign key refs as BIGINT.
Perhaps we should add a new linting script for this?
Yes, if the primary key that is referenced is INTEGER PRIMARY KEY, then it will become a BIGSERIAL PRIMARY KEY on Postgres, and the foreign key should be BIGINT to match. You'll mainly notice because you have to cast the ID from int32 to int64 and back in the tapdb Go code.
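As a minimal sketch of the int32/int64 round trip the comment describes (function names here are hypothetical, not the actual tapdb helpers): sqlc maps a Postgres BIGINT column to a Go int64, so any code that keeps the ID as an int32 has to cast when writing and reading the foreign key.

```go
package main

import "fmt"

// toDBID casts an in-memory int32 ID up to the int64 that sqlc
// expects for a BIGINT column. (Hypothetical helper for illustration.)
func toDBID(id int32) int64 {
	return int64(id)
}

// fromDBID casts a stored int64 back down to int32; this assumes the
// value actually fits in 32 bits.
func fromDBID(dbID int64) int32 {
	return int32(dbID)
}

func main() {
	dbID := toDBID(42)
	fmt.Println(fromDBID(dbID))
}
```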
I did a quick scan and the only other instance I found was mint_anchor_uni_commitments.batch_id. Perhaps this can be updated in one of the in-flight PRs?

I also started looking into sqlfluff, which can do linting and supports custom plugins, so we could add our own rule. Will create a PR soon.
Pull Request Test Coverage Report for Build 15845300932

💛 - Coveralls
LGTM, thanks for the fix!
This aligns the foreign key to be the same type as the referenced table's primary key. This was an oversight resulting from the type replacement map we use to make our primary key types compatible with both SQLite and Postgres. Even though all our primary keys in the SQL files are defined as INTEGER PRIMARY KEY, they are rewritten to BIGSERIAL PRIMARY KEY on Postgres to allow auto increment, which corresponds to a BIGINT. See scripts/gen_sqlc_docker.sh for a more in-depth explanation.
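To illustrate the type replacement described above (a sketch; the actual asset_transfers columns are abbreviated): the schema is written once in SQLite-compatible form, and the rewrite only changes how the primary key is declared on Postgres, which is why the foreign key must already be BIGINT in the shared schema.

```sql
-- As written in the shared SQL files (SQLite form):
CREATE TABLE asset_transfers (
    id INTEGER PRIMARY KEY
    -- ... other columns elided ...
);

-- After the type replacement for Postgres, the primary key becomes:
--   id BIGSERIAL PRIMARY KEY
-- which is a 64-bit auto-incrementing column, so a referencing
-- foreign key must be declared BIGINT to match:
CREATE TABLE asset_burns (
    burn_id INTEGER PRIMARY KEY,
    transfer_id BIGINT NOT NULL REFERENCES asset_transfers(id)
);
```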
We only added the burn table in the most recent release. But users might have burned assets before that, and those burns wouldn't be shown when listing burns (only via is_burn=true when listing assets). This migration retroactively inserts burned assets into the burn table that aren't there yet.
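A backfill migration like this could take roughly the following shape (a hypothetical sketch only; the table and column names below are illustrative and do not reflect the actual tapdb schema or migration):

```sql
-- Illustrative backfill: copy burns that are only recorded via the
-- assets table into the dedicated burn table, skipping rows that a
-- newer release already inserted.
INSERT INTO asset_burns (transfer_id, asset_id, amount)
SELECT t.id, a.asset_id, a.amount
FROM assets a
JOIN asset_transfers t ON t.id = a.transfer_id
WHERE a.is_burn = TRUE
  AND NOT EXISTS (
      SELECT 1 FROM asset_burns b WHERE b.transfer_id = t.id
  );
```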
We added the dedicated burn table after introducing the burn functionality. This means some asset burns users made before the table was added didn't show up when querying burns with tapcli assets listburns (they only appeared when listing all assets, where is_burn: true would show in the output). This fixes the problem by inserting any pre-existing asset burns into the dedicated table.
We also fix an issue with the type of the foreign key.
@Roasbeef does this also have an impact on the burn tree? Should we also insert the burns there?