Description
Is your feature request related to a problem or challenge?
> My goal is to have a file fully sorted by primary key, where each `fileRowNumber` is the index of that row in the file.
I am not sure what @daphnenhuch-at's use case is, but getting row numbers from a file is useful for several use cases I know of:
- Implementing delete vectors (i.e. filtering out rows whose row ids have been marked as deleted from a file)
- Implementing external indexes (e.g. having a full text index that tells you documents 10001 and 10003 match, and then wanting to fetch (only) those rows from the file); see the SQL sketch after this list
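For illustration, here is roughly what those two use cases could look like in SQL if a file row number column were queryable. This is only a sketch: the `file_row_number` column, the `my_table` table, and the `delete_vector` table are hypothetical names, not existing DataFusion features.

```sql
-- Delete vectors (hypothetical schema): skip rows whose position in the file
-- appears in a table of deleted row ids
SELECT *
FROM my_table
WHERE file_row_number NOT IN (SELECT deleted_row_number FROM delete_vector);

-- External index (hypothetical): fetch only the rows an external full text
-- index reported as matches
SELECT *
FROM my_table
WHERE file_row_number IN (10001, 10003);
```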
Today there are ways to compute this, but they are inefficient (for example, the workaround below will read all rows from the file, so if you are trying to select only one row based on its row number, a huge amount of work is wasted).
Today you can kind of get this information by:
- Disabling repartitioning by setting the `datafusion.execution.target_partitions` config setting to `1`. This is important because otherwise large tables will be scanned in parallel and data from multiple parallel chunks will be interleaved.
- Running a query for each file using the `row_number` window function. Something like:
```rust
ctx
    .read_parquet("file1.parquet", ParquetReadOptions::default())
    .await?
    .window(vec![row_number().alias(DATA_FUSION_ROW_NUMBER)])?
```
In SQL:

```sql
> set datafusion.execution.target_partitions = 1;
0 row(s) fetched.
Elapsed 0.001 seconds.

> select "VendorID", row_number() OVER () from 'yellow_tripdata_2025-01.parquet' limit 10;
+----------+-----------------------------------------------------------------------+
| VendorID | row_number() ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING |
+----------+-----------------------------------------------------------------------+
| 1        | 1                                                                     |
| 1        | 2                                                                     |
| 1        | 3                                                                     |
| 2        | 4                                                                     |
| 2        | 5                                                                     |
| 2        | 6                                                                     |
| 1        | 7                                                                     |
| 1        | 8                                                                     |
| 1        | 9                                                                     |
| 2        | 10                                                                    |
+----------+-----------------------------------------------------------------------+
10 row(s) fetched.
Elapsed 0.005 seconds.
```
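To actually fetch specific rows with this workaround you then have to wrap the window in a subquery and filter on it (still with `target_partitions = 1` so the numbering matches the file order). A sketch with example row numbers; note that the whole file is still read and numbered just to return two rows:

```sql
SELECT *
FROM (
  SELECT *, row_number() OVER () AS file_row_number
  FROM 'yellow_tripdata_2025-01.parquet'
) AS numbered
WHERE file_row_number IN (10002, 10003);
```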
Describe the solution you'd like
I would like to consider a nicer way to get the row number from the file and then write queries against it.
Something like:

```sql
select * from my_table where row_number IN (10002, 10003)
```
This would return the 10,002nd and 10,003rd rows in the file, respectively. The idea is that we could then:
- Do predicate pushdown on those row numbers
- Figure out how to still scan the file in parallel
Describe alternatives you've considered
I think we would need to add some sort of special column (similar to partitioning columns) to the listing table provider.
Another alternative would be to keep this kind of functionality out of the core and implement it in external table providers.
Additional context
- Related: Support metadata columns (`location`, `size`, `last_modified`) in `ListingTableProvider` #15173
- Add support for file row numbers in Parquet readers arrow-rs#7307 (adding support for row numbers in a parquet reader)