feat: support deleted rows in Comet #14062
Conversation
anuragmantri left a comment
LGTM overall. I will do another pass after Comet 0.10.0 is released. Thanks @parthchandra
    && field.fieldId() != MetadataColumns.LAST_UPDATED_SEQUENCE_NUMBER.fieldId();
}

private boolean supportsCometBatchReads(ScanTask task) {
nit: I created this method to filter out scans with deletions at the beginning. With your deletion support, we can remove this method and use the existing supportsParquetBatchReads, because supportsCometBatchReads and supportsParquetBatchReads are now identical.
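The refactor suggested here can be sketched as follows. This is a minimal, self-contained illustration, not the actual Iceberg code: `ScanTask` is a hypothetical stand-in, and the method bodies only show why the two checks collapse into one once deleted rows are handled natively.

```java
// Sketch: once deleted rows are handled in Comet native code, the extra
// delete-file guard in supportsCometBatchReads disappears and the method
// reduces to supportsParquetBatchReads.
public class BatchReadSupport {
  // Hypothetical stand-in for Iceberg's ScanTask; only what the sketch needs.
  static class ScanTask {
    final boolean isParquet;
    ScanTask(boolean isParquet) { this.isParquet = isParquet; }
  }

  static boolean supportsParquetBatchReads(ScanTask task) {
    return task.isParquet;
  }

  // Previously this also required the task to have no delete files; with
  // native deleted-row support it is now just a pass-through, so callers
  // can use supportsParquetBatchReads directly and this method can go away.
  static boolean supportsCometBatchReads(ScanTask task) {
    return supportsParquetBatchReads(task);
  }

  public static void main(String[] args) {
    System.out.println(supportsCometBatchReads(new ScanTask(true)));  // true
    System.out.println(supportsCometBatchReads(new ScanTask(false))); // false
  }
}
```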
I am a bit confused by this method too. I think @hsiang-c is right: we should use supportsParquetBatchReads.
@huaxingao Sorry about that. I added this in a different draft PR to fall back to Spark when delete files are present. It is no longer needed with Parth's work.
Ditto, can reuse supportsParquetBatchReads now.
@hsiang-c I will revise this once Comet 0.10.0 is released and your changes are merged.
Looks good overall. I will also do another pass after Comet 0.10.0 is released. Thanks @parthchandra for working on this!
Thanks @huaxingao. For reference, this is consumed on the Comet side here: apache/datafusion-comet@bb318a5#diff-00a84cd42c8ea9cf336eedc24279f079148064522404461b3d4c88d9b0e36259
Force-pushed from 33f65ae to 2e50d88.
This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.
This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time. |
This PR requires Comet 0.10.0 and also #13786.
The PR removes the restriction on handling deleted rows in Comet. The basic change is to replace the use of ColumnVectorWithFilter with CometSelectionVector, which provides the same functionality. CometSelectionVector represents the values vector and the row mapping ids as Arrow vectors, allowing fast zero-copy transfer via Arrow FFI. All processing is done in Comet native code.
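The selection-vector idea described above can be illustrated with a small sketch. This is not Comet's actual API; the names and layout below are hypothetical. The point is that instead of compacting surviving rows into a new vector (the ColumnVectorWithFilter approach), the values stay where they are and a second vector of surviving row indices provides the indirection, which is what makes zero-copy transfer over Arrow FFI possible.

```java
// Sketch of selection-vector semantics (illustrative names, not Comet's API):
// the decoded column values are left untouched, and a separate index vector
// records which rows survive the deletes. Reads go through the indirection.
public class SelectionVectorSketch {
  // Original decoded column values; suppose rows 1 and 3 were deleted.
  static final int[] VALUES = {10, 20, 30, 40, 50};
  // Selection vector: indices of the rows that survive the deletes.
  static final int[] SELECTION = {0, 2, 4};

  // Read the i-th surviving row by indirection, without copying VALUES.
  static int getRow(int i) {
    return VALUES[SELECTION[i]];
  }

  public static void main(String[] args) {
    for (int i = 0; i < SELECTION.length; i++) {
      System.out.println(getRow(i)); // prints 10, 30, 50
    }
  }
}
```

Because both VALUES and SELECTION are plain contiguous buffers, handing them across a native boundary (as Arrow vectors, in Comet's case) requires passing pointers rather than materializing a filtered copy.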