no attention backend support for softmax = learnable and cp_comm_type = p2p #2541

Description

@jordane95

When using context parallelism to fine-tune gpt-oss, no attention backend is supported for this configuration (softmax = learnable with cp_comm_type = p2p). I have to change cp_comm_type to a2a to enable FusedAttention, but a2a is potentially less efficient than p2p at large context lengths (say, 128k).
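For reference, a minimal sketch of the workaround, assuming TransformerEngine's `DotProductAttention` with its `softmax_type` option and `set_context_parallel_group(..., cp_comm_type=...)`; argument names and accepted values may differ across TE versions, and the head/channel sizes are placeholders, not from gpt-oss:

```python
import torch
import transformer_engine.pytorch as te


def build_cp_attention(cp_group, cp_global_ranks):
    """Context-parallel attention with learnable softmax (gpt-oss sinks)."""
    attn = te.DotProductAttention(
        num_attention_heads=64,    # placeholder sizes for illustration
        kv_channels=64,
        softmax_type="learnable",  # gpt-oss attention-sink softmax
    )
    # With cp_comm_type="p2p" (ring-style exchange), no attention backend
    # supports learnable softmax; switching to "a2a" lets FusedAttention run.
    attn.set_context_parallel_group(
        cp_group,
        cp_global_ranks,
        torch.cuda.Stream(),
        cp_comm_type="a2a",
    )
    return attn
```

The trade-off is the one described above: p2p overlaps KV exchange with attention compute ring-style, which tends to scale better at very long sequence lengths, so being forced onto a2a is a real regression at 128k-class contexts.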


Labels: bug (Something isn't working)
