Don't dealloc strings if the TensorBuffer is shared (#357) #358


Merged · 2 commits · Aug 6, 2021

Conversation

@brychcy (Contributor) commented Jul 30, 2021

No description provided.

@saudet (Contributor) commented Aug 4, 2021

There's no more straightforward way to get the information? It doesn't sound like TF_TensorMaybeMove() is always going to return null when the content is being shared.

@brychcy (Contributor, Author) commented Aug 4, 2021

> There's no more straightforward way to get the information? It doesn't sound like TF_TensorMaybeMove() is always going to return null when the content is being shared.

If there is, I haven't found it.

BTW, what I really wonder is: shouldn't string deallocation happen automatically at the native TensorFlow level when the last tensor that uses the TensorBuffer is released?
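For readers following along, the check under discussion can be sketched as a small, self-contained Java model. All names here are hypothetical stand-ins: the real fix calls `TF_TensorMaybeMove()` through the TensorFlow C API bindings, which yields the tensor only when its buffer is not shared (and null otherwise), so string deallocation is skipped for shared buffers.

```java
// Hypothetical model of the shared-buffer check; not the actual PR code.
final class SharedBufferModel {
  static final class Buffer {
    int refCount;                 // how many tensors share this buffer
    boolean stringsDeallocated;   // did we free the string contents?
    Buffer(int refCount) { this.refCount = refCount; }
  }

  // Mirrors the TF_TensorMaybeMove() contract: returns the buffer only
  // when it is exclusively owned, null when it is shared.
  static Buffer maybeMove(Buffer buf) {
    return buf.refCount == 1 ? buf : null;
  }

  // Deallocate strings only when no other tensor shares the buffer.
  static void release(Buffer buf) {
    if (maybeMove(buf) != null) {
      buf.stringsDeallocated = true; // safe: we are the last owner
    }
    buf.refCount--;
  }

  public static void main(String[] args) {
    Buffer shared = new Buffer(2);
    release(shared);
    System.out.println(shared.stringsDeallocated); // false: still shared

    Buffer exclusive = new Buffer(1);
    release(exclusive);
    System.out.println(exclusive.stringsDeallocated); // true: last owner
  }
}
```

This also illustrates saudet's caveat above: the sketch treats "maybeMove returned null" as equivalent to "shared", which is exactly the assumption being questioned.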

@saudet (Contributor) commented Aug 5, 2021

No, that doesn't happen. We need to allocate the strings manually, and deallocate them manually as well, or we end up with memory leaks; see issue #251.
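The manual pairing saudet describes can be modeled with a simple allocation counter. This is a hypothetical sketch, not the library's API: `allocateString`/`deallocateString` stand in for the native string-tensor init and dealloc calls, and the counter shows how skipping the dealloc step produces the leak tracked in issue #251.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of manual string-tensor memory management.
final class StringTensorModel {
  static final AtomicInteger liveAllocations = new AtomicInteger();

  // Stands in for the native string allocation into a tensor buffer.
  static long allocateString(String value) {
    liveAllocations.incrementAndGet();
    return value.length(); // pretend native handle
  }

  // Stands in for the native string deallocation.
  static void deallocateString(long handle) {
    liveAllocations.decrementAndGet();
  }

  public static void main(String[] args) {
    long h = allocateString("hello");
    deallocateString(h); // skipping this call would leak native memory
    System.out.println(liveAllocations.get()); // 0 when calls are paired
  }
}
```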

@karllessard (Collaborator) previously approved these changes Aug 5, 2021 and left a comment:

Even if we are not 100% sure that this fix covers all cases, it seems to work for @brychcy, so let's merge it and give it a try. Thanks @brychcy!

@karllessard (Collaborator) commented:

Oops, it looks like the modified code is not formatted correctly. @brychcy, can you please run mvn spotless:apply locally before pushing a newer version of this PR?

@brychcy (Contributor, Author) commented Aug 5, 2021

> Oops, it looks like the modified code is not formatted correctly, @brychcy can you please run mvn spotless:apply locally before pushing a newer version of this PR?

Done.

At first it didn't do anything for me, because my origin/master pointed to my own version and spotless is configured with <ratchetFrom>origin/master</ratchetFrom>.

Since the whole file was reformatted, I did that in a separate commit.
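For context, the ratcheting behavior brychcy describes comes from the spotless-maven-plugin configuration. A configuration of roughly this shape (the formatter step and surrounding details are assumptions, not this repository's exact pom) explains why spotless:apply was a no-op until origin/master diverged:

```xml
<plugin>
  <groupId>com.diffplug.spotless</groupId>
  <artifactId>spotless-maven-plugin</artifactId>
  <configuration>
    <!-- Only files changed relative to this git ref are formatted,
         so a local origin/master pointing at your own branch makes
         spotless:apply do nothing. -->
    <ratchetFrom>origin/master</ratchetFrom>
    <java>
      <googleJavaFormat/>
    </java>
  </configuration>
</plugin>
```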

@karllessard (Collaborator) left a comment:

Thanks a lot @brychcy for fixing this!


3 participants