
Conversation

@zhangbilang (Collaborator)

No description provided.


@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @SmudgedWings, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on adapting the VisionZip token reduction method to be fully compatible with the LLaVA-Next model. It introduces new configuration parameters and corresponding logic to enable more flexible token pruning and merging strategies, while also addressing minor code quality improvements and model-specific path corrections.

Highlights

  • LLaVA-Next Compatibility: The VisionZip token reduction method has been updated to ensure compatibility with the LLaVA-Next model. This includes specific handling for its input preparation and token length, such as dynamically replacing the prepare_inputs_labels_for_multimodal method.
  • New VisionZip Operational Modes: Introduced prune_only and merge_only operational modes for VisionZip. These new flags, configurable in visionzip.yml, provide more flexible token reduction strategies, allowing scenarios where only pruning or only merging of tokens occurs (a config sketch follows this list).
  • Configuration and Logic Adjustments: The dominant token count in the visionzip.yml configuration has been adjusted from 191 to 162, reflecting changes for LLaVA-Next. Corresponding logic in visionzip.py has been updated to process tokens based on these new modes and to correctly manage index_masks for selective token retention.
  • Code Cleanup and Refinements: Minor code cleanup has been performed, including the removal of debugging breakpoints (pdb.set_trace()) from utils.py. Additionally, an error-handling bug was fixed (a NotImplementedError that was never raised), and the access path for the vision tower in the Llava model class was corrected.
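For orientation, here is a minimal sketch of how these options might appear in visionzip.yml. Only `prune_only`, `merge_only`, and the dominant token count of 162 come from this PR; every other key and value below is an assumption for illustration.

```yaml
# Hypothetical visionzip.yml layout; only prune_only, merge_only, and
# dominant: 162 are named in this PR. All other keys are illustrative.
sparse:
  method: VisionZip
  special:
    dominant: 162      # adjusted from 191 for llava-next
    prune_only: false  # keep only the dominant tokens; skip merging
    merge_only: false  # skip dominant-token selection; merge everything
```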

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates VisionZip to support llava-next models. The changes include updating the configuration, adding logic to handle prune_only and merge_only modes, and adapting the model interaction for llava-next through method monkey-patching and new hooks. A critical bug where NotImplementedError was not being raised is also fixed. I've suggested a small optimization to avoid an unnecessary computation. Overall, the changes look good and are consistent with the existing codebase.
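To illustrate the monkey-patching pattern mentioned above, here is a minimal self-contained sketch. `DummyModel` and the token-reduction logic are stand-ins invented for this example; only the method name `prepare_inputs_labels_for_multimodal` matches the real llava-next code.

```python
# Minimal sketch of dynamically replacing a bound method on a model instance.
# DummyModel and the reduction step are illustrative stand-ins, not PR code.
import types


class DummyModel:
    def prepare_inputs_labels_for_multimodal(self, visual_tokens):
        # Stand-in for llava-next's original multimodal input preparation.
        return visual_tokens


def make_patched(original, keep):
    def patched(self, visual_tokens):
        prepared = original(self, visual_tokens)
        return prepared[:keep]  # hypothetical reduction: keep the first `keep` tokens
    return patched


model = DummyModel()
original_fn = DummyModel.prepare_inputs_labels_for_multimodal
model.prepare_inputs_labels_for_multimodal = types.MethodType(
    make_patched(original_fn, keep=162), model
)

print(len(model.prepare_inputs_labels_for_multimodal(list(range(576)))))  # 162
```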

```diff
         raise NotImplementedError
     elif 'unpad' in mm_patch_merge_type and 'anyres_max' in image_aspect_ratio:
-        NotImplementedError
+        raise NotImplementedError
```


critical

This is a critical bug fix. The statement `NotImplementedError` by itself is a no-op and does not raise an exception. Adding `raise` ensures an exception is actually thrown, preventing the program from continuing with an unsupported configuration, which could otherwise lead to silent failures or incorrect behavior.
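To make the difference concrete, a small standalone demonstration (not code from the PR):

```python
# A bare exception class as a statement is a no-op: Python evaluates the
# class object, discards it, and keeps executing.
def before_fix():
    NotImplementedError
    return "silently continued"


def after_fix():
    raise NotImplementedError("unsupported mm_patch_merge_type")


print(before_fix())  # prints: silently continued

try:
    after_fix()
except NotImplementedError as exc:
    print(f"raised as intended: {exc}")
```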

Comment on lines 313 to +329
```diff
 topk_indices = cls_attention_sum.topk(dominant_num, dim=1).indices + 1
-all_indices = torch.cat(
-    [
-        torch.zeros(
-            (hidden_states.shape[0], 1),
-            dtype=topk_indices.dtype,
-            device=topk_indices.device,
-        ),
-        topk_indices,
-    ],
-    dim=1,
-)
+if pruning_paras['merge_only']:
+    all_indices = torch.zeros(
+        (hidden_states.shape[0], 1),
+        dtype=topk_indices.dtype, device=topk_indices.device
+    )
+    dominant_num = 0
+else:
+    all_indices = torch.cat(
+        [
+            torch.zeros(
+                (hidden_states.shape[0], 1),
+                dtype=topk_indices.dtype, device=topk_indices.device,
+            ),
+            topk_indices,
+        ], dim=1,
+    )
```


medium

The `topk` operation can be computationally expensive. In the current implementation, it is called on line 313 regardless of whether `pruning_paras['merge_only']` is true. When `merge_only` is true, the result of `topk` is not fully utilized (only its `dtype` and `device` attributes are used), leading to unnecessary computation.

To improve efficiency, consider moving the `topk` call into the `else` block so it is only executed when its result is actually needed.

Suggested change

```diff
-topk_indices = cls_attention_sum.topk(dominant_num, dim=1).indices + 1
-if pruning_paras['merge_only']:
-    all_indices = torch.zeros(
-        (hidden_states.shape[0], 1),
-        dtype=topk_indices.dtype, device=topk_indices.device
-    )
-    dominant_num = 0
-else:
-    all_indices = torch.cat(
-        [
-            torch.zeros(
-                (hidden_states.shape[0], 1),
-                dtype=topk_indices.dtype, device=topk_indices.device,
-            ),
-            topk_indices,
-        ], dim=1,
-    )
+if pruning_paras['merge_only']:
+    all_indices = torch.zeros(
+        (hidden_states.shape[0], 1),
+        dtype=torch.long, device=hidden_states.device
+    )
+    dominant_num = 0
+else:
+    topk_indices = cls_attention_sum.topk(dominant_num, dim=1).indices + 1
+    all_indices = torch.cat(
+        [
+            torch.zeros(
+                (hidden_states.shape[0], 1),
+                dtype=topk_indices.dtype, device=topk_indices.device,
+            ),
+            topk_indices,
+        ], dim=1,
+    )
```
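As a rough way to see what the suggestion saves in merge_only mode, a micro-benchmark sketch; the shapes and iteration count here are assumptions for illustration, not measurements from this PR:

```python
import time

import torch

# Assumed shapes: batch of 8 images, 576 patch tokens each, 162 dominant tokens.
cls_attention_sum = torch.rand(8, 576)
dominant_num = 162

start = time.perf_counter()
for _ in range(10_000):
    _ = cls_attention_sum.topk(dominant_num, dim=1).indices + 1
print(f"10k topk calls: {time.perf_counter() - start:.3f}s")
# With the suggestion applied, merge_only mode skips all of this work and
# builds only a one-column placeholder tensor of zeros.
```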

@llmc-reviewer merged commit 73c131d into ModelTC:main on Jul 28, 2025 (2 checks passed).
