Fix flipped (height,width) in batched inference notebook #246
`(height, width)` is flipped in `add_visual_prompt` and `add_text_prompt` in `examples/sam3_image_batched_inference.ipynb`:

- `size` of a `SAMImage` is `h, w`, not `w, h`
- `InferenceMetadata` also expects `(height, width)`

Since it's flipped twice, it doesn't affect behavior in the notebook, but `w` misleadingly stores the height. The fix is either `h, w = datapoint.images[0].size` (current proposal) or `w, h = datapoint.images[0].data.size` (which would access the underlying PIL Image, whose `size` is `w, h`).

The other changed line 84, regarding `torch.inference_mode`, has been concurrently addressed by #245.
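For context, a minimal self-contained sketch of the convention mismatch behind the change (a generic PIL/NumPy illustration, not the notebook's code): PIL's `Image.size` is `(width, height)`, while array- and tensor-style sizes, like `SAMImage`'s, are `(height, width)`.

```python
from PIL import Image
import numpy as np

img = Image.new("RGB", (640, 480))   # PIL: .size is (width, height) -> (640, 480)
arr = np.asarray(img)                # array layout is (height, width, channels)

w, h = img.size                      # correct unpacking for a PIL Image
h2, w2 = arr.shape[:2]               # correct unpacking for an array/tensor-style size

assert (h, w) == (h2, w2) == (480, 640)
```

Unpacking an `(height, width)` size as `w, h` produces the same silent swap this PR removes: the code still runs, but each name holds the other dimension.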