Iceberg-rust 0.3.0
The main objective of 0.3.0 is to have a working read path (non-exhaustive list):
- Scan API: added by @liurenjie1024 in "feat: Introduce basic file scan planning" #129
- Predicate pushdown into the Parquet reader: worked on by @viirya in "feat: Convert predicate to arrow filter and push down to parquet reader" #295
- Parquet projection into Arrow streams: worked on by @viirya in "feat: Read Parquet data file with projection" #245; still some limitations, see the PR
- Manifest pruning using the `field_summary`: skipping data at the highest level by pruning away manifests:
  - Transforms added by @marvinlanhenke in "feat: Project transform" #309
  - ManifestEvaluator added by @sdd in "Add `ManifestEvaluator`, used to filter manifests in table scans" #322
  - Implement todos: some of the expressions still need to be implemented. Issue in "Implement all functions of BoundPredicateVisitor for ManifestFilterVisitor" #350
  - Tests: port the test suite from Python to Rust
  - Filter in `TableScan`: in flight by @sdd in "Implement manifest filtering in `TableScan`" #323
- Skipping manifest entries within a manifest based on the `102: partition` struct:
  - Accessors added by @sdd in "Add Struct Accessors to BoundReferences" #317
  - Projection added by @marvinlanhenke in "feat: Project transform" #309
  - ExpressionEvaluator: implement the evaluator, worked on by @marvinlanhenke in "Implement `ExpressionEvaluator`" #358. Binds the `partition-spec` schema to the `102: partition` struct and evaluates it.
  - Filter in `TableScan`
- Skip data files using the metrics evaluator:
  - InclusiveMetricsEvaluator worked on by @sdd in "Add `InclusiveMetricsEvaluator`" #347
  - InclusiveProjection added by @sdd in "Add `ManifestEvaluator`, used to filter manifests in table scans" #322
  - Filter in `TableScan`
- DataFusion: integration with Apache DataFusion to add SQL support. Tracking issue: "Integration with Datafusion" #357
  - Initial groundwork in "Basic Integration with Datafusion" #324.
- Runtime
  - Parallel loading: "Add runtime module to enable concurrent load of manifest files" #124
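Taken together, these items form the read path. Below is a rough sketch of how they could be exercised from a user's point of view; the method names follow the scan builder from the PRs above, but the exact signatures should be treated as assumptions rather than the finalized API:

```rust
use futures::TryStreamExt;
use iceberg::expr::Reference;
use iceberg::spec::Datum;
use iceberg::table::Table;

// Assumes `table` was loaded from a catalog. The filter is pushed down:
// manifests are pruned via the field summaries, manifest entries via the
// partition struct, and data files via the InclusiveMetricsEvaluator,
// before the Parquet reader applies a row-level filter.
async fn scan_example(table: &Table) -> iceberg::Result<()> {
    let scan = table
        .scan()
        .select(["id", "name"]) // column projection into the Arrow stream
        .with_filter(Reference::new("id").less_than(Datum::long(100)))
        .build()?;

    // Stream Arrow record batches; only files that can match the predicate are read.
    let mut batches = scan.to_arrow().await?;
    while let Some(batch) = batches.try_next().await? {
        println!("read {} rows", batch.num_rows());
    }
    Ok(())
}
```

The pushed-down filter is what the ManifestEvaluator, ExpressionEvaluator, and InclusiveMetricsEvaluator consume to prune manifests, manifest entries, and data files before any Parquet bytes are read.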
Blocking issues:
- Field-IDs related:
  - "Empty snapshot ID should be `Null` instead of `-1`" #352
Nice to have (related to the query plan optimizations above):
- Implement skipping based on the sequence number: skip `DELETE` manifests that contain unrelated delete files.
- Add support for more FileIO backends ("Tracking issues of aligning storage support with iceberg-java" #408)
State of catalog integration:
- Catalog support
  - REST Catalog: first stab by @liurenjie1024 in "feat: First version of rest catalog" #78
    - @Fokko will follow up with IT tests
  - Glue Catalog: added by @marvinlanhenke in:
  - SQL Catalog: worked on by @JanKaul in "Sql catalog" #229
  - Hive Catalog: added by @Xuanwo.
  - Do we want similar IT tests as in PyIceberg?
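For reference, loading a table through the REST catalog from #78 looks roughly like the sketch below. The builder and constructor shapes are assumptions based on the current crate layout (`iceberg` and `iceberg-catalog-rest`); treat them as illustrative rather than a stable API:

```rust
use iceberg::{Catalog, TableIdent};
use iceberg_catalog_rest::{RestCatalog, RestCatalogConfig};

async fn load_example() -> iceberg::Result<()> {
    // Point the client at a running REST catalog service.
    let config = RestCatalogConfig::builder()
        .uri("http://localhost:8181".to_string())
        .build();
    let catalog = RestCatalog::new(config);

    // Resolve the identifier and load the current table metadata.
    let ident = TableIdent::from_strs(["ns", "taxis"])?;
    let table = catalog.load_table(&ident).await?;
    println!(
        "loaded {:?}, has current snapshot: {}",
        table.identifier(),
        table.metadata().current_snapshot().is_some()
    );
    Ok(())
}
```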
For the release after that, I think the commit path is going to be important.
Iceberg-rust 0.4.0 and beyond
Nice to have for the 0.3.0 release, but not required. Of course, open for debate.
- Support for Positional Deletes: entails matching the deletes to the data files based on the statistics.
- Support for Equality Deletes: entails putting the delete files in the right order so they are applied in the right sequence.
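To illustrate the positional-delete case: the matching step boils down to grouping `(file_path, pos)` pairs per data file and dropping those row positions while reading. A minimal, self-contained sketch using only std types (the eventual reader API will of course look different):

```rust
use std::collections::{BTreeSet, HashMap};

/// One record from a positional delete file: which data file and which row.
struct PositionalDelete {
    file_path: String,
    pos: u64,
}

/// Group delete positions per data file so each reader only consults
/// the deletes that apply to the file it is scanning.
fn index_deletes(deletes: &[PositionalDelete]) -> HashMap<&str, BTreeSet<u64>> {
    let mut index: HashMap<&str, BTreeSet<u64>> = HashMap::new();
    for d in deletes {
        index.entry(d.file_path.as_str()).or_default().insert(d.pos);
    }
    index
}

/// Keep only the row positions of `data_file` that are not deleted.
fn live_rows(data_file: &str, row_count: u64, index: &HashMap<&str, BTreeSet<u64>>) -> Vec<u64> {
    let deleted = index.get(data_file);
    (0..row_count)
        .filter(|pos| deleted.map_or(true, |set| !set.contains(pos)))
        .collect()
}
```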
Commit path
The commit path entails writing a new metadata JSON.
- Applying updates to the metadata: updating the metadata matters both for writing a new version of the JSON in the case of a non-REST catalog and for keeping an up-to-date version in memory. It is strongly recommended to re-use the Updates/Requirements objects provided by the REST catalog protocol (see the sketch after this list).
  - REST Catalog: serializes the updates and requirements into JSON, which is dispatched to the REST catalog.
  - Other catalogs: instead of dispatching the updates/requirements to the catalog, there are additional steps:
    - Logic to validate the requirements against the metadata, to detect commit conflicts.
    - Writing a new version of the metadata.json.
    - Provide locking mechanisms within the commit (Glue, Hive, SQL, ..)
- Update table properties: sets properties on the table. Probably the best place to start, since it doesn't require a complicated API.
- Schema evolution: API to update the schema and produce new metadata.
- Partition spec evolution: API to update the partition spec and produce new metadata.
- Sort order evolution: API to update the sort order and produce new metadata.
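As a sketch of the update/requirement flow for non-REST catalogs: validate the requirements against the current metadata, apply the updates, then persist a new metadata.json. The enum variants below are modelled on the REST catalog protocol, but the type and field names here are illustrative assumptions, not the crate's actual definitions:

```rust
use std::collections::HashMap;

// Illustrative types modelled on the REST catalog protocol; the real crate
// types (e.g. TableUpdate / TableRequirement) may differ in shape and naming.
enum TableUpdate {
    SetProperties { updates: HashMap<String, String> },
    AddSnapshot { snapshot_id: i64 },
}

enum TableRequirement {
    // The commit is only valid if the branch still points at this snapshot.
    AssertRefSnapshotId { r#ref: String, snapshot_id: i64 },
}

struct TableMetadata {
    current_snapshot_id: Option<i64>,
    properties: HashMap<String, String>,
}

// Non-REST catalogs validate the requirements locally (commit-conflict
// detection), apply the updates, and then write a new metadata.json under
// whatever locking mechanism the catalog provides (Glue, Hive, SQL, ...).
fn commit(
    mut metadata: TableMetadata,
    requirements: Vec<TableRequirement>,
    updates: Vec<TableUpdate>,
) -> Result<TableMetadata, String> {
    for req in &requirements {
        match req {
            TableRequirement::AssertRefSnapshotId { snapshot_id, .. } => {
                if metadata.current_snapshot_id != Some(*snapshot_id) {
                    return Err("commit conflict: branch moved since the update was staged".into());
                }
            }
        }
    }
    for update in updates {
        match update {
            TableUpdate::SetProperties { updates } => metadata.properties.extend(updates),
            TableUpdate::AddSnapshot { snapshot_id } => {
                metadata.current_snapshot_id = Some(snapshot_id)
            }
        }
    }
    Ok(metadata) // the caller persists this as the next metadata.json
}
```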
Metadata tables
Metadata tables are used to inspect the table. Having these tables also allows easy implementation of the maintenance procedures since you can easily list all the snapshots, and expire the ones that are older than a certain threshold.
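For example, snapshot expiration becomes little more than a filter over the snapshots. In the sketch below the accessors (`metadata().snapshots()`, `timestamp_ms()`) are assumptions about the metadata API rather than confirmed signatures:

```rust
use iceberg::table::Table;

// Hypothetical sketch: collect the IDs of snapshots older than a retention
// threshold. The accessor names are assumptions; the point is that a
// snapshots metadata table reduces expire-snapshots to a simple filter.
fn expired_snapshot_ids(table: &Table, older_than_ms: i64) -> Vec<i64> {
    table
        .metadata()
        .snapshots()
        .filter(|s| s.timestamp_ms() < older_than_ms)
        .map(|s| s.snapshot_id())
        .collect()
}
```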
Write support
Most of the work in write support is around generating the correct Iceberg metadata. Some decisions can be made, for example first supporting only FastAppends, and only V2 metadata.
It is common to have multiple snapshots in a single commit to the catalog. For example, an overwrite operation on a partition can be a delete + append operation. This makes the implementation easier, since you can separate the problems and tackle them one by one. It also helps the roadmap, since these operations can be developed in parallel.
- Commit semantics
  - MergeAppend: appends new manifest entries to existing manifest files. Reduces the amount of metadata produced, but takes more time to commit since existing metadata has to be rewritten, and retries are also more costly.
  - FastAppend: generates a new manifest per commit, which allows fast commits but generates more metadata in the long run. PR by @ZENOTME in "feat: support append data file and add e2e test" #349 (see the sketch after this list).
- Snapshot generation: manipulation of data within a table is done by appending snapshots to the metadata JSON.
  - APPEND: only data files were added and no files were removed.
  - REPLACE: data and delete files were added and removed without changing table data; i.e., compaction, changing the data file format, or relocating data files.
  - OVERWRITE: data and delete files were added and removed in a logical overwrite operation.
  - DELETE: data files were removed and their contents logically deleted and/or delete files were added to delete rows.
- Add files: to add existing Parquet files to a table. Issue in "Support to append file on table" #345
  - Name mapping: in case the files don't have field IDs set.
- Summary generation: the part of the snapshot that indicates what's in the snapshot.
- Metrics collection: there are two situations:
  - Collect metrics when writing: this is how the Java API does it; during writing, the upper and lower bounds are tracked and the number of null and NaN records is counted.
  - Collect metrics from the footer: when an existing file is added, the footer of the Parquet file is opened to reconstruct all the metrics needed by Iceberg.
- Deletes: this mainly relies on strict projection to check whether data files cannot match the predicate.
  - Strict projection needs to be added to the transforms.
  - Strict Metrics Evaluator to determine whether the predicate cannot match.
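To make the append path concrete, here is a rough sketch of a FastAppend commit in the spirit of #349: a transaction stages pre-built `DataFile` entries (carrying the collected metrics) and commits through the catalog. The method names follow that PR, but the exact signatures should be read as assumptions:

```rust
use iceberg::spec::DataFile;
use iceberg::table::Table;
use iceberg::transaction::Transaction;
use iceberg::Catalog;

// Sketch of a FastAppend commit: each DataFile carries the column metrics
// (value counts, null counts, lower/upper bounds) collected at write time
// or reconstructed from the Parquet footer.
async fn append(table: Table, catalog: &impl Catalog, files: Vec<DataFile>) -> iceberg::Result<Table> {
    let tx = Transaction::new(&table);
    let mut append = tx.fast_append(None, vec![])?; // new manifest per commit
    append.add_data_files(files)?;
    let tx = append.apply().await?;
    tx.commit(catalog).await // writes the new snapshot and metadata.json
}
```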
Future topics
- Python bindings
- WASM to run Iceberg-rust in the browser
Contribute
If you want to contribute to the upcoming milestone, feel free to comment on this issue. If there is anything unclear or missing, feel free to reach out here as well 👍