SC'25 UltraAttn: Efficiently Parallelizing Attention through Hierarchical Context-Tiling
Updated Aug 14, 2025 - Python
A high-performance Block Sparse Attention kernel in Triton.
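Both entries center on block-sparse attention, where the sequence is split into fixed-size blocks and each query block attends only to a chosen subset of key/value blocks, skipping the rest of the score matrix entirely. As a rough illustration of the idea (not code from either repository; the function name and the block-mask layout are assumptions for this sketch, and a real Triton kernel would tile and fuse this on the GPU):

```python
import math

def block_sparse_attention(q, k, v, block_mask, block_size):
    """Toy block-sparse attention over lists of vectors.

    q, k, v: seq_len x d nested lists.
    block_mask[i][j]: True if query block i may attend to key block j.
    Assumes every query block attends to at least one key block.
    """
    seq_len, d = len(q), len(q[0])
    out = []
    for qi in range(seq_len):
        qb = qi // block_size
        scores, idxs = [], []
        for ki in range(seq_len):
            if block_mask[qb][ki // block_size]:  # skip masked-out blocks
                s = sum(q[qi][t] * k[ki][t] for t in range(d)) / math.sqrt(d)
                scores.append(s)
                idxs.append(ki)
        # numerically stable softmax over the surviving scores only
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        row = [0.0] * d
        for w, ki in zip(exps, idxs):
            for t in range(d):
                row[t] += (w / z) * v[ki][t]
        out.append(row)
    return out
```

With a block-causal mask such as `[[True, False], [True, True]]`, the first query block never touches the second key block, which is where the compute savings come from at scale.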