Conversation

@xingliu14
Collaborator

Description

This PR adds the infrastructure skeleton for FP8 quantization support on TPU, following the established pattern from AWQ quantization.

Changes

  • Added FP8 constant to quant_methods.py
  • Created VllmFp8Config extending Fp8Config and JaxCommonConfig
  • Created VllmFp8LinearMethod extending Fp8LinearMethod with skeleton methods
  • Registered FP8 in quantization method registry
  • Added placeholder test file
  • Fixed typo in awq.py comments

Addressing #1121.
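For reviewers unfamiliar with the quantization-method pattern, the changes above can be sketched roughly as below. This is a hypothetical illustration only: the class names (`VllmFp8Config`, `VllmFp8LinearMethod`) come from the PR description, but the base classes here are minimal stand-ins so the snippet runs without a vLLM/TPU installation, and the method signatures are simplified relative to the real interfaces.

```python
# Minimal stand-ins for the real vLLM base classes (assumptions, not the
# actual vLLM API) so the skeleton pattern can be shown self-contained.
class Fp8Config:
    """Stand-in for vLLM's Fp8Config."""
    def __init__(self, is_checkpoint_fp8_serialized: bool = False):
        self.is_checkpoint_fp8_serialized = is_checkpoint_fp8_serialized


class JaxCommonConfig:
    """Stand-in for the TPU backend's shared JAX config mixin."""


class Fp8LinearMethod:
    """Stand-in for vLLM's Fp8LinearMethod."""
    def __init__(self, quant_config):
        self.quant_config = quant_config


class VllmFp8Config(Fp8Config, JaxCommonConfig):
    """FP8 quantization config for the TPU (torchax) backend."""

    @classmethod
    def get_name(cls) -> str:
        # Matches the FP8 constant registered in quant_methods.py.
        return "fp8"

    def get_quant_method(self, layer, prefix: str):
        # Skeleton: always hand back the TPU FP8 linear method for now.
        return VllmFp8LinearMethod(self)


class VllmFp8LinearMethod(Fp8LinearMethod):
    """Skeleton FP8 linear method; real weight creation/apply to follow."""

    def create_weights(self, layer, *args, **kwargs):
        raise NotImplementedError("FP8 weight creation not implemented yet")

    def apply(self, layer, x, bias=None):
        raise NotImplementedError("FP8 matmul not implemented yet")


config = VllmFp8Config()
method = config.get_quant_method(layer=None, prefix="")
print(type(method).__name__)  # VllmFp8LinearMethod
```

The AWQ support this PR mirrors follows the same shape: a config class that names the method and a linear-method class whose `create_weights`/`apply` hold the backend-specific logic.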

Tests

Please describe how you tested this change, and include any instructions and/or
commands to reproduce.

Checklist

Before submitting this PR, please make sure:

  • I have performed a self-review of my code.
  • I have added necessary comments in my code, particularly in hard-to-understand areas.
  • I have made or will make corresponding changes to any relevant documentation.

Signed-off-by: Xing Liu <xingliu14@gmail.com>
@xingliu14 xingliu14 changed the title [WIP] [Torchax] fp8 quantization skeleton [Torchax] fp8 quantization skeleton Dec 14, 2025
@xingliu14
Collaborator Author

@kyuyeunk PTAL.
