Arrow: Infer the types when reading #1669

Merged
merged 11 commits into apache:main on Mar 26, 2025

Conversation

@Fokko (Contributor) commented on Feb 16, 2025

Rationale for this change

Time to give this another go 😆

When reading a Parquet file using PyArrow, there is some metadata stored in the Parquet file that makes a column either a large type (e.g. large_string) or a normal type (string). The difference is that the large types use a 64-bit offset to encode their arrays. This is not always needed, so we could first read the data in the types it is stored in, and let PyArrow decide here:

result = pa.concat_tables(tables, promote_options="permissive")

In PyArrow today we just bump everything to a large type, which might lead to additional memory consumption because it allocates an int64 array for the offsets instead of an int32 array.
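To make that concrete, here is a minimal sketch (not PyIceberg code, data is made up) of what the permissive promotion does: the result only becomes a large type when one of the inputs actually needs 64-bit offsets.

import pyarrow as pa

small = pa.table({"name": pa.array(["a", "b"], type=pa.string())})
also_small = pa.table({"name": pa.array(["c"], type=pa.string())})
large = pa.table({"name": pa.array(["d"], type=pa.large_string())})

# All inputs use 32-bit offsets, so the result stays string
print(pa.concat_tables([small, also_small], promote_options="permissive").schema)

# One input uses 64-bit offsets, so PyArrow promotes the column to large_string
print(pa.concat_tables([small, large], promote_options="permissive").schema)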

I thought we would be good to go for this now with the new lower bound of PyArrow at 17. But it looks like we still have to wait for Arrow 18 to fix the issue with the date types:

apache/arrow#43183

Fixes: #1049

Are these changes tested?

Yes, existing tests :)

Are there any user-facing changes?

Before, PyIceberg would always return the large Arrow types (e.g. large_string instead of string). After this change, it will return the type the data was written with.
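As an illustration of the user-facing difference (a sketch only; the catalog name "default" and table "db.events" are placeholders, not part of this PR):

from pyiceberg.catalog import load_catalog

catalog = load_catalog("default")      # assumes a configured catalog named "default"
tbl = catalog.load_table("db.events")  # assumes an existing table

arrow_table = tbl.scan().to_arrow()
# Before this change: string columns come back as large_string
# After this change: the schema mirrors the types the Parquet files were written with
print(arrow_table.schema)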

@@ -1750,7 +1750,7 @@ def to_arrow_batch_reader(self) -> pa.RecordBatchReader:
         return pa.RecordBatchReader.from_batches(
             target_schema,
             batches,
-        )
+        ).cast(target_schema)
@Fokko (Contributor, Author)

This will still return large types if you stream the batches because we don't want to fetch all the schemas upfront.
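For reference, a small sketch (illustrative schema and data, not the PyIceberg code path) of the pattern the diff above introduces: the reader casts each batch to a single target schema lazily, as the batches are consumed, so nothing has to be materialized upfront.

import pyarrow as pa

target_schema = pa.schema([pa.field("name", pa.large_string())])
batches = [
    pa.record_batch({"name": pa.array(["a", "b"], type=pa.string())}),
    pa.record_batch({"name": pa.array(["c"], type=pa.string())}),
]

reader = pa.RecordBatchReader.from_batches(
    pa.schema([pa.field("name", pa.string())]), batches
).cast(target_schema)

for batch in reader:
    assert batch.schema == target_schema  # each batch arrives already cast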

Comment on lines 1571 to 1572
if property_as_bool(self._io.properties, PYARROW_USE_LARGE_TYPES_ON_READ, False):
    result = result.cast(arrow_schema)
@Fokko (Contributor, Author) commented on Feb 18, 2025

I left this in, but I would lean toward deprecating it, since I don't think we want to trouble the user with this. I think it should be an implementation detail based on how large the buffers are.
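For completeness, a sketch of how a user could opt back into the old behavior through this property (the key is assumed to be "pyarrow.use-large-types-on-read", the value behind PYARROW_USE_LARGE_TYPES_ON_READ; catalog and table names are placeholders):

from pyiceberg.catalog import load_catalog

# Opt back into large types on read via the I/O property checked in the snippet above
catalog = load_catalog("default", **{"pyarrow.use-large-types-on-read": "true"})
tbl = catalog.load_table("db.events")
print(tbl.scan().to_arrow().schema)  # large_string / large_binary, as before this change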

@Fokko (Contributor, Author) commented on Feb 18, 2025

@sungwy Thoughts? :D

@sungwy (Collaborator) left a comment

Hi @Fokko - thank you for pinging me for review! The change looks good to me, but I have a reservation about introducing this change without a deprecation warning.

Firstly, since the PyIceberg code base doesn't have a properly defined list of public classes, we assume all our classes to be public-facing unless they start with an underscore. I'd argue that removing an input parameter from the ArrowProjectionVisitor __init__ method is an API change.

Secondly, changing the default value of PYARROW_USE_LARGE_TYPES_ON_READ to True for the to_table method also seems like a breaking change for users reading Iceberg tables through PyIceberg: their large_string columns will change to string columns on upgrade without a warning.

Would it make sense to introduce this change in two stages:

  1. First, introduce a new config variable like PYICEBERG_INFER_LARGE_TYPES_ON_READ, set it to False by default, and raise a deprecation warning when the flag is set to False (sketched below)?
  2. Then, remove PYICEBERG_INFER_LARGE_TYPES_ON_READ and PYARROW_USE_LARGE_TYPES_ON_READ in the next major version?
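Something along these lines, as a rough sketch of stage 1 (the flag name is from the proposal above; the key string, helper, and warning text are made up for illustration and are not the merged code):

import warnings

PYICEBERG_INFER_LARGE_TYPES_ON_READ = "pyiceberg.infer-large-types-on-read"  # hypothetical key

def infer_large_types(properties: dict) -> bool:
    # Hypothetical helper: keep the old behavior (flag off) by default, but warn about it
    value = properties.get(PYICEBERG_INFER_LARGE_TYPES_ON_READ, "false").lower() == "true"
    if not value:
        warnings.warn(
            "Defaulting to large Arrow types on read is deprecated; "
            "in the next major release the types will be inferred from the files.",
            DeprecationWarning,
        )
    return value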

@sungwy (Collaborator) left a comment

LGTM!

@Fokko mentioned this pull request on Mar 8, 2025
@kevinjqliu (Contributor) left a comment

LGTM!

But, it looks like we still have to wait for Arrow 18 to fix the issue with the date types:

Should we first bump the min version to Arrow 18?

Comment on lines 1393 to 1395
# https://github.com/apache/arrow/issues/41884
# https://github.com/apache/arrow/issues/43183
# Would be good to remove this later on
Contributor

Should we remove this comment? It seems related to the schema type inference.
1b9b884

pa.field("binary", pa.large_binary()),
pa.field("list", pa.large_list(pa.large_string())),
pa.field("binary", pa.binary()),
pa.field("list", pa.list_(pa.string())),
Contributor

nit: to also show that a complex type stays the same

Suggested change
-            pa.field("list", pa.list_(pa.string())),
+            pa.field("list", pa.list_(pa.large_string())),

@Fokko (Contributor, Author)

Good one!

@Fokko added the changelog label (Indicates that the PR introduces changes that require an entry in the changelog) on Mar 12, 2025
kevinjqliu pushed a commit that referenced this pull request Mar 15, 2025
While @kevinjqliu did an amazing job summarizing the new stuff in 0.9.0
in the GitHub release
(https://github.com/apache/iceberg-python/releases/tag/pyiceberg-0.9.0),
I think it would be good to formalize this a bit.

This also came up in #1669
where we introduced a behavioral change. cc @sungwy

I think it would be good to allow users to populate the changelog
section to ensure they know about any relevant changes.

The template is pretty minimal now to avoid being a big barrier to
opening a PR.
@Fokko (Contributor, Author) commented on Mar 26, 2025

Should we first bump the min version to Arrow 18?

If you don't use date types, then everything works fine :) I'm a bit hesitant to bump it very aggressively, see #1822.

@Fokko merged commit 7a56ddb into apache:main on Mar 26, 2025
7 checks passed
@@ -906,7 +906,7 @@ def test_table_scan_override_with_small_types(catalog: Catalog) -> None:
     expected_schema = pa.schema(
         [
             pa.field("string", pa.string()),
-            pa.field("string-to-binary", pa.binary()),
+            pa.field("string-to-binary", pa.large_binary()),
Contributor

@Fokko is this right? Type promotion for string -> binary results in a large_binary type:

def visit_binary(self, _: BinaryType) -> pa.DataType:
    return pa.large_binary()

I found these 3 places where _ConvertToArrowSchema converts to a large type by default.
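A quick way to see that default in action (assuming schema_to_pyarrow goes through _ConvertToArrowSchema; field names here are placeholders): converting an Iceberg schema should print the large variants.

from pyiceberg.schema import Schema
from pyiceberg.types import BinaryType, NestedField, StringType
from pyiceberg.io.pyarrow import schema_to_pyarrow

iceberg_schema = Schema(
    NestedField(field_id=1, name="name", field_type=StringType(), required=False),
    NestedField(field_id=2, name="payload", field_type=BinaryType(), required=False),
)

# Expected to show large_string / large_binary: the conversion defaults to
# 64-bit-offset types, which is what the comment above points out
print(schema_to_pyarrow(iceberg_schema))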

Labels: changelog (Indicates that the PR introduces changes that require an entry in the changelog.)

Successfully merging this pull request may close these issues: [feat] push down schema casting to the record batch level

3 participants