
colexechash: improve memory accounting in the hash table #84229


Merged · 1 commit · Jul 13, 2022

Conversation

@yuzefovich (Member) commented Jul 12, 2022

This commit improves the memory accounting in the hash table to be more
precise in the case where the distsql_workmem limit is exhausted.
Previously, we would allocate large slices first and only perform the
memory accounting after the fact, possibly running out of budget, which
would result in an error being thrown. We'd end up in a situation where
the hash table was still referencing the larger newly-allocated slices
while only the previous memory usage was accounted for. This commit
makes it so that we account for the needed capacity upfront, then
perform the allocation, and then reconcile the accounting if necessary.
This way we're much more likely to hit the budget error before making
the large allocations.
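
To make the account-first pattern concrete, here is a minimal,
self-contained Go sketch. Every name in it (memBudget, reserve,
growUint64Slice) is a hypothetical stand-in, not the actual
colexechash code:

    package main

    import "fmt"

    // memBudget is a toy stand-in for a memory account with a
    // distsql_workmem-style limit.
    type memBudget struct {
        used, limit int64
    }

    // reserve registers bytes with the budget *before* the allocation is
    // made, so a budget error surfaces while only the old, smaller slice
    // is still referenced.
    func (b *memBudget) reserve(bytes int64) error {
        if b.used+bytes > b.limit {
            return fmt.Errorf("memory budget exceeded: %d+%d > %d", b.used, bytes, b.limit)
        }
        b.used += bytes
        return nil
    }

    const sizeOfUint64 = 8

    // growUint64Slice accounts for the needed capacity upfront, performs
    // the allocation, and then reconciles the accounting in case the
    // runtime provides more capacity than was requested.
    func growUint64Slice(b *memBudget, old []uint64, newLen int) ([]uint64, error) {
        if newLen <= cap(old) {
            return old[:newLen], nil
        }
        if err := b.reserve(int64(newLen-cap(old)) * sizeOfUint64); err != nil {
            return old, err // the budget error is hit before the large allocation
        }
        s := make([]uint64, newLen)
        copy(s, old)
        b.used += int64(cap(s)-newLen) * sizeOfUint64 // reconcile cap vs. len
        return s, nil
    }

    func main() {
        b := &memBudget{limit: 1 << 10} // 1 KiB budget
        old := make([]uint64, 0, 8)
        b.used = int64(cap(old)) * sizeOfUint64
        _, err := growUint64Slice(b, old, 1<<20) // far over budget
        fmt.Println(err)                         // budget error, no large slice allocated
    }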

Additionally, this commit accounts for some internal slices in the hash
table that are used only in the hash joiner case.

Also, both the hash aggregator and the hash joiner now eagerly release
references to these internal slices of the hash table when spilling to
disk occurs (we cannot do the same for the unordered distinct because
there the hash table is still used after spilling).
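
The eager release might look roughly like the following hedged sketch,
where hashTableSketch and releaseMem are hypothetical stand-ins for
the real HashTable and its allocator's memory account:

    package main

    import "fmt"

    // hashTableSketch is a toy stand-in, not the real HashTable.
    type hashTableSketch struct {
        buildScratch []uint64 // internal slices needed only while in memory
        probeScratch []uint64
        accountedFor int64
        releaseMem   func(bytes int64) // stands in for the allocator's account
    }

    // release eagerly drops the references to the internal slices and
    // returns the reserved bytes, so the memory becomes reclaimable at
    // the moment of spilling rather than when the operator is closed.
    func (ht *hashTableSketch) release() {
        ht.buildScratch, ht.probeScratch = nil, nil
        ht.releaseMem(ht.accountedFor)
        ht.accountedFor = 0
    }

    func main() {
        var freed int64
        ht := &hashTableSketch{
            buildScratch: make([]uint64, 1024),
            accountedFor: 1024 * 8,
            releaseMem:   func(b int64) { freed += b },
        }
        ht.release() // called when spilling to disk
        fmt.Println(freed, ht.buildScratch == nil) // 8192 true
    }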

This required a minor change to the way the unordered distinct spills
to disk. Previously, the memory error could only occur in two spots
(and one of those would leave the hash table in an inconsistent state,
which we were "smart" about repairing). Now, however, the memory error
can occur in more spots (leaving several different possible
inconsistent states), so this commit accepts a slight performance
regression and simply rebuilds the hash table from scratch, once, when
the unordered distinct spills to disk.
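
The rebuild-once choice can be sketched as below; every name is
hypothetical, and only the "rebuild from the still-valid buffered
tuples" shape comes from the description above:

    package main

    import "fmt"

    // distinctSketch is a toy stand-in for the unordered distinct's state.
    type distinctSketch struct {
        bufferedTuples []uint64 // remains valid when the budget error hits
        hashTable      []uint64 // derived state; may be mid-build and inconsistent
    }

    // spill runs after a budget error. Since the error can now surface in
    // many spots, leaving several different inconsistent states, the
    // derived state is rebuilt from scratch exactly once instead of being
    // repaired case by case.
    func (d *distinctSketch) spill() {
        d.hashTable = d.hashTable[:0]
        for _, t := range d.bufferedTuples {
            d.hashTable = append(d.hashTable, t) // placeholder for the real build
        }
    }

    func main() {
        d := &distinctSketch{bufferedTuples: []uint64{1, 2, 3}, hashTable: []uint64{1}}
        d.spill()
        fmt.Println(d.hashTable) // [1 2 3]
    }

The trade-off is a single extra build pass at spill time in exchange
for not having to reason about every possible intermediate state.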

Addresses: #60022.
Addresses: #64906.

Release note: None

@cockroach-teamcity (Member) commented:

This change is Reviewable

@yuzefovich force-pushed the hashtable-accounting branch 4 times, most recently from ffe43c8 to 6da0f02 on July 12, 2022 15:37
@yuzefovich requested review from michae2, DrewKimball and a team on July 12, 2022 15:37
@yuzefovich marked this pull request as ready for review on July 12, 2022 15:37
@michae2 (Collaborator) left a comment

:lgtm: Nice!

Reviewed 3 of 5 files at r1, 2 of 2 files at r2, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @DrewKimball and @yuzefovich)


pkg/sql/colexec/colexechash/hashtable.go line 453 at r2 (raw file):

	ht.buildFromBufferedTuplesNoAccounting()
	// Now ensure that the accounting is precise (cap's of the slices might
	// exceed len's that we've accounted for). Note that

nit: Missing the rest of this sentence.


pkg/sql/colexec/colexechash/hashtable.go line 1011 at r2 (raw file):

	if ht.ProbeScratch.limitedSlicesAreAccountedFor {
		ht.ProbeScratch = hashTableProbeBuffer{}
		ht.allocator.ReleaseMemory(probeBufferInternalMaxMemUsed())

nit: Would it make sense to also set ht.ProbeScratch.limitedSlicesAreAccountedFor = false here? I know we're not currently reusing the hash table after calling Release but it seems like it isn't impossible that someone would one day.

@DrewKimball (Collaborator) left a comment

:lgtm:

Reviewable status: :shipit: complete! 2 of 0 LGTMs obtained (waiting on @michae2 and @yuzefovich)


pkg/sql/colexec/colexechash/hashtable.go line 1001 at r2 (raw file):

// append-only batch.
// TODO(yuzefovich): lose the reference to the Vals as well, when possible.
func (ht *HashTable) Release() {

Should we also be releasing the slices in the DatumAlloc struct?

[nit] It might be good to nil out the remaining slices in the HashTable, or alternatively add a comment explaining that they're small enough not to matter.


pkg/sql/colexec/colexechash/hashtable.go line 1011 at r2 (raw file):

Previously, michae2 (Michael Erickson) wrote…

nit: Would it make sense to also set ht.ProbeScratch.limitedSlicesAreAccountedFor = false here? I know we're not currently reusing the hash table after calling Release but it seems like it isn't impossible that someone would one day.

That happens when we set ht.ProbeScratch = hashTableProbeBuffer{} but I agree it's not obvious; a comment might be a good idea here.
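
For context, assigning a zero-value struct in Go resets every field at
once, which is why the flag is already unset. A tiny standalone
illustration (the struct is a toy stand-in, with field names borrowed
loosely from the snippet above rather than from the real
hashTableProbeBuffer):

    package main

    import "fmt"

    type hashTableProbeBuffer struct {
        limitedSlicesAreAccountedFor bool
        hashBuffer                   []uint64
    }

    func main() {
        ps := hashTableProbeBuffer{
            limitedSlicesAreAccountedFor: true,
            hashBuffer:                   make([]uint64, 4),
        }
        // The zero-value assignment resets all fields: the flag becomes
        // false and the slice reference is dropped.
        ps = hashTableProbeBuffer{}
        fmt.Println(ps.limitedSlicesAreAccountedFor, ps.hashBuffer == nil) // false true
    }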

@yuzefovich force-pushed the hashtable-accounting branch from 6da0f02 to cedd0df on July 12, 2022 23:50
@yuzefovich (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 2 stale) (waiting on @DrewKimball and @michae2)


pkg/sql/colexec/colexechash/hashtable.go line 1001 at r2 (raw file):

Previously, DrewKimball (Drew Kimball) wrote…

Should we also be releasing the slices in the DatumAlloc struct?

[nit] It might be good to nil out the remaining slices in the HashTable, or alternatively add a comment explaining that they're small enough not to matter.

Yeah, good point; it's better to just nil everything out.


pkg/sql/colexec/colexechash/hashtable.go line 1011 at r2 (raw file):

Previously, DrewKimball (Drew Kimball) wrote…

That happens when we set ht.ProbeScratch = hashTableProbeBuffer{} but I agree it's not obvious; a comment might be a good idea here.

Indeed, it was already being unset implicitly. Nil-ing out the whole hash table should be clearer.

@yuzefovich force-pushed the hashtable-accounting branch from cedd0df to 935090d on July 13, 2022 01:58
@yuzefovich (Member, Author) left a comment

I made minor changes to also release the memory under the append-only buffered batch; I'd appreciate another quick look.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 2 stale) (waiting on @DrewKimball and @michae2)

@michae2 (Collaborator) left a comment

:lgtm:

Reviewed 3 of 3 files at r3, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (and 1 stale) (waiting on @DrewKimball)

@yuzefovich (Member, Author) commented:

TFTRs!

bors r+

@craig (bot, Contributor) commented Jul 13, 2022

Build failed:

@yuzefovich (Member, Author) commented:

bors r+

@craig (bot, Contributor) commented Jul 13, 2022

Build succeeded:
