From 27aa699b10348d169fde30f2dcdc94bacf968191 Mon Sep 17 00:00:00 2001
From: chengtao-lv <897674362@qq.com>
Date: Mon, 10 Nov 2025 14:25:26 +0800
Subject: [PATCH 1/2] Update README

Added news about LLMC+ acceptance at AAAI 2026.
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 16a47d5a..8fe31672 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,8 @@ docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/llmcompression:pure-lates
 
 ## :fire: Latest News
 
+- **Nov 9, 2025:** 🍺🍺🍺 Our work [**LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit**](https://arxiv.org/abs/2508.09981) has been accpeted by AAAI 2026.
+
 - **August 13, 2025:** 🚀 We have open-sourced our compression solution for **vision-language models (VLMs)**, supporting over a total of **20 algorithms** that cover both **token reduction** and **quantization**. This release enables flexible, plug-and-play compression strategies for a wide range of multimodal tasks. please refer to the [documentation](https://llmc-en.readthedocs.io/en/latest/advanced/token_reduction.html).
 
 - **May 12, 2025:** 🔥 We now fully support quantization for the **`Wan2.1`** series of video generation models and provide export of truly quantized **INT8/FP8** weights, compatible with the [lightx2v](https://github.com/ModelTC/lightx2v) inference framework. For details, please refer to the [lightx2v documentation](https://llmc-en.readthedocs.io/en/latest/backend/lightx2v.html).

From 9b1a2a295aacb3b1c01a32e7fefd58d17d14dbb3 Mon Sep 17 00:00:00 2001
From: chengtao-lv <897674362@qq.com>
Date: Mon, 10 Nov 2025 14:31:00 +0800
Subject: [PATCH 2/2] Fix typo in latest news section of README

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8fe31672..ae17adb0 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,7 @@ docker pull registry.cn-hangzhou.aliyuncs.com/yongyang/llmcompression:pure-lates
 
 ## :fire: Latest News
 
-- **Nov 9, 2025:** 🍺🍺🍺 Our work [**LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit**](https://arxiv.org/abs/2508.09981) has been accpeted by AAAI 2026.
+- **Nov 9, 2025:** 🍺🍺🍺 Our work [**LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit**](https://arxiv.org/abs/2508.09981) has been accepted by AAAI 2026.
 
 - **August 13, 2025:** 🚀 We have open-sourced our compression solution for **vision-language models (VLMs)**, supporting over a total of **20 algorithms** that cover both **token reduction** and **quantization**. This release enables flexible, plug-and-play compression strategies for a wide range of multimodal tasks. please refer to the [documentation](https://llmc-en.readthedocs.io/en/latest/advanced/token_reduction.html).
 