
Switch default string storage from python to pyarrow (if installed) also for NA-variant of the StringDtype #62118


Open
wants to merge 5 commits into main

Conversation

jorisvandenbossche (Member)

Closes #60287

(need to add whatsnew)

@jorisvandenbossche added this to the 3.0 milestone Aug 15, 2025
@jorisvandenbossche added the Strings (String extension data type and string data) and NA - MaskedArrays (Related to pd.NA and nullable extension arrays) labels Aug 15, 2025
Comment on lines +1457 to +1463
if dtype_backend == "pyarrow":
# using the StringDtype below would use large_string by default
# keep here to pyarrow's default of string
import pyarrow as pa
dtype = ArrowDtype(pa.string())
else:
dtype = StringDtype()
@jorisvandenbossche (Member Author) Aug 15, 2025

This is one behaviour question we should decide on. Currently, pd.read_csv(..., dtype_backend="pyarrow") in general gives you an ArrowDtype using pa.string() (string[pyarrow]), but without the special case above, this PR would instead give you pa.large_string() (large_string[pyarrow]).

The reason is that this implementation first creates a column with StringDtype, and then in a next step below converts it to the equivalent ArrowExtensionArray. Before, with "python" storage for the string dtype, pyarrow converted that to pa.string(); now that the StringDtype uses "pyarrow" storage by default (which is backed by pa.large_string()), the pyarrow conversion gives large_string.

But we could also not special-case this here in the code, and just update the tests to expect large_string for the C parser.
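For illustration, a minimal sketch of the behaviour being discussed (my own example, not part of the PR; assumes pyarrow is installed, and the toy csv data is made up):

import io

import pandas as pd

data = io.StringIO("a,b\nx,1\ny,2\n")

# With the special case kept, column "a" comes back as string[pyarrow],
# i.e. ArrowDtype(pa.string()); without it, the intermediate StringDtype
# (pyarrow storage, backed by large_string) would make it large_string[pyarrow].
df = pd.read_csv(data, dtype_backend="pyarrow")
print(df.dtypes)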

Member

I would support this returning pa.large_string instead of pa.string to align with Patrick's changes in 2.2 #56220.

And generally ArrowExtensionArray._from_sequence should probably return pa.large_string instead of pa.string for a sequence of strings (if it doesn't already)

@WillAyd (Member) Aug 15, 2025

Not a blocker, but I would prefer returning whatever pyarrow provides and not forcing this to a particular type with dtype_backend="pyarrow".

Sure, many users of pandas probably couldn't care less about the differences between string / large_string, and there likely isn't much overhead in upcasting to the latter in pandas. However, it gets murky when you start thinking about string_view and the larger I/O system. I don't think the pyarrow backend should force string_view to large_string, because that can have non-trivial performance impacts, and if we follow that train of thought it would be inconsistent to cast string to large_string.
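To make the concern concrete, a rough sketch (my own illustration, not something the PR does; assumes a recent pyarrow that supports string_view, i.e. >= 16):

import pyarrow as pa

# string_view and large_string are distinct Arrow layouts; casting between them
# materializes a new array, so forcing everything to large_string is not free.
arr = pa.array(["x", "y", "z"], type=pa.string_view())
cast = arr.cast(pa.large_string())
print(arr.type, "->", cast.type)  # string_view -> large_string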

Member

I'm totally ignorant of the differences between string/large_string/string_view/whatever. I lean toward "always give a pd.StringDtype".

@jorisvandenbossche (Member Author) Aug 16, 2025

I lean toward "always give a pd.StringDtype"

To be clear, this is not about StringDtype, but about ArrowDtype. If the user asks for dtype_backend="pyarrow", we currently give you a dataframe with all ArrowDtype columns. Unless that is what you would change? (specifically for strings: use StringDtype backed by pyarrow instead of ArrowDtype(string))

EDIT: I see the link to #62129 now, so yes that is what you meant ;)
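For reference, a small sketch of the two dtypes being contrasted here (my own illustration; assumes pyarrow is installed):

import pandas as pd
import pyarrow as pa

# NA-variant StringDtype with pyarrow storage: pandas' string dtype, pd.NA semantics
s1 = pd.Series(["a", "b"], dtype=pd.StringDtype(storage="pyarrow"))

# ArrowDtype wrapping pa.string(): what dtype_backend="pyarrow" currently returns
s2 = pd.Series(["a", "b"], dtype=pd.ArrowDtype(pa.string()))

print(s1.dtype, "|", s2.dtype)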

@jorisvandenbossche (Member Author)

And generally ArrowExtensionArray._from_sequence should probably return pa.large_string instead of pa.string for a sequence of strings (if it doesn't already)

It does not, because it just relies on pyarrow's type inference, and pyarrow will always prefer pa.string() over pa.large_string() when inferring (unless the data source has specific data type information).

And I think I agree with Will's comment about preferring to rely on what pyarrow gives and not forcing specific types, specifically for ArrowDtype (but no strong opinion here).

The reason we get large_string here is that for StringDtype we make a very specific choice on our side to use large_string instead of string, and at that point pyarrow of course preserves that choice when converting to a pyarrow array.
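A quick check of the inference behaviour described above (my own sketch; assumes pyarrow is installed; _from_sequence is internal API and only shown because it is what the comment refers to):

import pandas as pd
import pyarrow as pa

# pyarrow's own inference prefers string over large_string
print(pa.array(["a", "b"]).type)  # string

# ArrowExtensionArray._from_sequence relies on that inference, so it also gives string
print(pd.arrays.ArrowExtensionArray._from_sequence(["a", "b"]).dtype)  # string[pyarrow]

# the NA-variant StringDtype with pyarrow storage uses large_string, and that choice
# should be preserved when converting back to Arrow (expected: large_string)
s = pd.Series(["a", "b"], dtype=pd.StringDtype(storage="pyarrow"))
print(pa.Table.from_pandas(s.to_frame("col"))["col"].type)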

Labels
NA - MaskedArrays (Related to pd.NA and nullable extension arrays), Strings (String extension data type and string data)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Change default string storage from "python" to "pyarrow" (if installed) for NA-variant of StringDtype
4 participants