
Conversation

@parthchandra

This PR requires Comet 0.10.0 and also #13786.
The PR removes the restriction on handling deleted rows in Comet. The basic change is to replace the use of ColumnVectorWithFilter with CometSelectionVector, which provides the same functionality. CometSelectionVector represents the values vector and the row-mapping indices as Arrow vectors, allowing fast zero-copy transfer via Arrow FFI. All processing is done in Comet native code.
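For readers unfamiliar with the selection-vector technique this PR relies on, here is a minimal sketch of the idea. The class and method names below are illustrative only (not the actual CometSelectionVector API): instead of compacting the batch by copying surviving values after deletes are applied, the reader keeps the original values vector intact plus an index vector mapping each logical row to its physical position. Because both are plain vectors, they can cross the FFI boundary without copying.

```java
// Illustrative sketch of a selection vector: deleted rows are skipped by
// indirection through an index vector rather than by copying survivors.
// Names here are hypothetical, not the real Comet API.
class SelectionVectorSketch {
  private final long[] values;   // unfiltered column values, as read from the file
  private final int[] selection; // physical indices of the rows that were NOT deleted

  SelectionVectorSketch(long[] values, int[] selection) {
    this.values = values;
    this.selection = selection;
  }

  /** Number of rows that survive the delete filter. */
  int rowCount() {
    return selection.length;
  }

  /** Value of logical row i, resolved through the selection mapping. */
  long get(int i) {
    return values[selection[i]];
  }

  public static void main(String[] args) {
    // Physical row 1 is deleted: logical rows map to physical rows 0, 2, 3.
    SelectionVectorSketch v =
        new SelectionVectorSketch(new long[] {10L, 20L, 30L, 40L}, new int[] {0, 2, 3});
    System.out.println(v.rowCount()); // 3
    System.out.println(v.get(1));     // 30
  }
}
```

In the real implementation both buffers are Arrow vectors, so the handoff to Comet native code is zero-copy; this sketch shows only the indexing idea.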

@github-actions github-actions bot added the spark label Sep 12, 2025
@parthchandra parthchandra marked this pull request as draft September 12, 2025 16:25

@anuragmantri anuragmantri left a comment


LGTM overall. I will do another pass after Comet 0.10.0 is released. Thanks @parthchandra

&& field.fieldId() != MetadataColumns.LAST_UPDATED_SEQUENCE_NUMBER.fieldId();
}

private boolean supportsCometBatchReads(ScanTask task) {
Contributor

nit: I created this method to filter scans w/ deletions at the beginning. With your deletion support, we can remove this method and use the existing supportsParquetBatchReads b/c supportsCometBatchReads and supportsParquetBatchReads are identical now.

Contributor

I am a bit confused by this method too. I think @hsiang-c is right: we should use supportsParquetBatchReads

Contributor

@huaxingao Sorry about that. I added this in a different draft PR to fallback to Spark when delete files are present. It is no longer needed with Parth's work.

&& field.fieldId() != MetadataColumns.LAST_UPDATED_SEQUENCE_NUMBER.fieldId();
}

private boolean supportsCometBatchReads(ScanTask task) {
Contributor

Ditto, can reuse supportsParquetBatchReads now.

@parthchandra
Author

@hsiang-c I will revise this once Comet 0.10.0 is released and your changes are merged.

@huaxingao
Contributor

Looks good overall. I will also do another pass after Comet 0.10.0 is released. Thanks @parthchandra for working on this!

@parthchandra
Author

Thanks @huaxingao. For reference, this is consumed on the Comet side here: apache/datafusion-comet@bb318a5#diff-00a84cd42c8ea9cf336eedc24279f079148064522404461b3d4c88d9b0e36259

@github-actions

github-actions bot commented Nov 8, 2025

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.

@github-actions github-actions bot added the stale label Nov 8, 2025
@github-actions
Copy link

This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

