bulk delete #34
Conversation
Pull Request Overview
This PR enhances the bulk delete functionality to use Dataverse's asynchronous BulkDelete API instead of sequential individual deletes. The multi-record delete now returns a job ID and supports optional blocking with configurable timeout and polling.
- Replaced the sequential delete loop with a BulkDelete API call built from a QueryExpression
- Added async job polling with `_wait_for_async_job` and state-interpretation logic
- Updated the public API to return a job ID for multi-record deletes and to support wait parameters
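A minimal sketch of the public-API shape these bullets describe, assuming a single/multi dispatch inside `delete`. The stub OData layer and the parameter names (`wait`, `timeout`) are illustrative assumptions, not the SDK's actual signatures:

```python
from typing import List, Optional, Union

class FakeODataLayer:
    """Stand-in for the SDK's internal OData layer; method names and
    return values here are assumptions for illustration."""
    def delete_single(self, entity: str, record_id: str) -> None:
        pass  # would issue one HTTP DELETE; nothing to return

    def delete_multiple(self, entity: str, ids: List[str],
                        wait: bool = False, timeout: float = 300.0) -> str:
        return "job-123"  # would submit a BulkDelete job and return its ID

def delete(odata: FakeODataLayer, entity: str,
           ids: Union[str, List[str]], wait: bool = False,
           timeout: float = 300.0) -> Optional[str]:
    """Single-record deletes return None; multi-record deletes return
    the BulkDelete job ID (optionally blocking until the job finishes)."""
    if isinstance(ids, str):
        odata.delete_single(entity, ids)
        return None
    return odata.delete_multiple(entity, ids, wait=wait, timeout=timeout)
```

The `Optional[str]` return type mirrors the review note that only multi-record deletes produce a job ID.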
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| src/dataverse_sdk/odata.py | Reimplemented _delete_multiple to use BulkDelete API; added _wait_for_async_job and _interpret_async_job_state helper methods; imported Tuple, datetime, and timezone |
| src/dataverse_sdk/client.py | Updated delete method signature to accept wait parameters and return optional job ID for bulk deletes |
| examples/quickstart.py | Removed concurrent deletion approach; demonstrated fire-and-forget vs wait-for-completion bulk delete patterns; added retry logic for column deletion |
| README.md | Updated documentation to reflect new bulk delete behavior, return type, and wait functionality; removed outdated limitation about DeleteMultiple |
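The `_wait_for_async_job` helper listed for `odata.py` could look roughly like the loop below. The `AsyncJobTimeoutError` type, the `fetch_state` callback, and the parameter names are hypothetical stand-ins; the terminal check relies on Dataverse's `asyncoperation` convention that `statecode` 3 means Completed:

```python
import time
from datetime import datetime, timezone

class AsyncJobTimeoutError(Exception):
    """Hypothetical error raised when the job does not finish in time."""

def wait_for_async_job(fetch_state, timeout: float = 300.0,
                       poll_interval: float = 5.0, sleep=time.sleep) -> dict:
    """Poll an asyncoperation row until it reaches a terminal state.

    fetch_state() is assumed to return a dict with the job's current
    "statecode" and "statuscode" (e.g. from a GET on the asyncoperation).
    """
    start = datetime.now(timezone.utc)
    while True:
        job = fetch_state()
        if job["statecode"] == 3:  # 3 = Completed (terminal state)
            return job
        elapsed = (datetime.now(timezone.utc) - start).total_seconds()
        if elapsed >= timeout:
            raise AsyncJobTimeoutError(
                f"job still in statecode {job['statecode']} after {timeout}s")
        sleep(poll_interval)
```

Injecting `sleep` keeps the loop testable without real waiting, which matches the configurable-polling behavior described in the overview.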
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
Using BulkDelete for the delete-multiple scenario, per the recommendation in the docs: https://learn.microsoft.com/en-us/power-apps/developer/data-platform/webapi/update-delete-entities-using-web-api#delete-multiple-records-in-a-single-request
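For reference, submitting a BulkDelete job boils down to POSTing a QueryExpression-based payload to the `BulkDelete` action, which responds with a `JobId`. The helper below is an illustrative sketch of that payload; the exact field shapes and serialization should be verified against the linked documentation:

```python
from typing import List

def build_bulk_delete_request(entity: str, id_attribute: str,
                              ids: List[str]) -> dict:
    """Sketch of a BulkDelete action body (field names as documented for
    the Dataverse Web API BulkDelete action; treat as an approximation)."""
    return {
        "QuerySet": [{
            "EntityName": entity,
            "ColumnSet": {"AllColumns": False, "Columns": []},
            "Criteria": {
                "FilterOperator": "And",
                "Conditions": [{
                    "AttributeName": id_attribute,
                    "Operator": "In",       # match any of the given IDs
                    "Values": ids,
                }],
            },
        }],
        "JobName": f"Bulk delete of {len(ids)} {entity} records",
        "SendEmailNotification": False,
        "ToRecipients": [],
        "CCRecipients": [],
        "RecurrencePattern": "",            # empty = run once, no recurrence
    }
```

The returned `JobId` is what the SDK's `delete` now hands back for multi-record deletes, and what the polling helper tracks.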

Manual perf test numbers:
- Delete 25 records: sequential (calling single-record delete in a loop) takes ~20s, while BulkDelete takes ~50s (without wait, BulkDelete returns in <1s)
- Delete 500 records: sequential takes 500s, while BulkDelete takes 150s
Copilot-generated summary:
This pull request introduces significant improvements to the Dataverse SDK's delete functionality, adding support for asynchronous bulk deletes with job tracking and optional waiting for completion. The documentation and examples are updated to reflect these changes, and the codebase is refactored to remove outdated concurrent deletion logic and improve retry handling.
Delete API enhancements:
- The `delete` method in `client.py` now supports bulk deletes via an async BulkDelete job, returning the job ID for multi-record deletes and allowing optional waiting for job completion.
- The `odata.py` implementation adds the `_delete_multiple` method, which submits a BulkDelete job and optionally waits for completion, including robust polling and state-interpretation logic.

Documentation updates:
- `README.md` is updated to document the new bulk delete behavior, including job ID return values, async/wait options, and updated guidelines. [1] [2] [3] [4] [5]

Example and demo improvements:
- The `examples/quickstart.py` demo now shows single and bulk deletes, including fire-and-forget and wait-for-completion modes, and removes the old concurrent deletion logic.

Code cleanup:
Internal API improvements:
- The async job polling in `odata.py` is robust, with clear state/status interpretation and error handling for timeouts and failures. [1] [2]
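The state/status interpretation mentioned here can reduce to a small mapping over the `asyncoperation` table's documented values (`statecode` 3 = Completed; `statuscode` 30/31/32 = Succeeded/Failed/Canceled). The enum and function below are an illustrative sketch of `_interpret_async_job_state`-style logic, not the SDK's actual code:

```python
from enum import Enum

class JobOutcome(Enum):
    """Hypothetical result type for interpreting a BulkDelete job's state."""
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"
    CANCELED = "canceled"

# Terminal statuscode values from the asyncoperation table definition.
_TERMINAL_STATUS = {30: JobOutcome.SUCCEEDED,
                    31: JobOutcome.FAILED,
                    32: JobOutcome.CANCELED}

def interpret_async_job_state(statecode: int, statuscode: int) -> JobOutcome:
    """Map an asyncoperation's statecode/statuscode to an outcome."""
    if statecode != 3:  # 3 = Completed; anything else is still in flight
        return JobOutcome.RUNNING
    # Completed but with an unrecognized statuscode is treated as a failure.
    return _TERMINAL_STATUS.get(statuscode, JobOutcome.FAILED)
```

Treating an unknown terminal `statuscode` as a failure is a conservative design choice so callers never mistake an unexpected end state for success.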