Wrong solution to L3-31_VisionAttention in the artifact repo #52

Description

@mzweilin

Dear KernelAgent team,

Thanks for sharing your work with the community. After carefully examining some results in the artifact repository, I found that the solution to 31_VisionAttention of KernelBench Level 3 might be wrong.

https://github.com/Laurawly/kernelagent-artifacts/blob/main/L3/31_VisionAttention/final_kernel.py

The solution does not implement the attention mechanism at all; it only implements LayerNorm, yet it passes the correctness test anyway, which suggests the test itself is wrong.
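To illustrate, here is a minimal reproduction sketch that compares the artifact kernel against the KernelBench reference on random inputs. It follows KernelBench's usual Model/ModelNew and get_inputs()/get_init_inputs() conventions; the import paths are placeholders, not the artifact repo's actual layout.

```python
import torch

# Placeholder import paths: point these at the Level 3 reference problem
# and at the artifact's final_kernel.py.
from vision_attention_ref import Model, get_inputs, get_init_inputs
from final_kernel import ModelNew

init_args = get_init_inputs()

# Seed identically so both models draw the same random weights, mirroring
# how the KernelBench harness instantiates them.
torch.manual_seed(0)
ref = Model(*init_args).cuda().eval()
torch.manual_seed(0)
cand = ModelNew(*init_args).cuda().eval()

with torch.no_grad():
    inputs = [t.cuda() for t in get_inputs()]
    out_ref = ref(*inputs)
    out_cand = cand(*inputs)

# A solution that skips attention entirely should fail this check
# on reasonably sized random inputs.
print("max abs diff:", (out_ref - out_cand).abs().max().item())
assert torch.allclose(out_ref, out_cand, atol=1e-2, rtol=1e-2)
```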

I also tried generating a solution with the complex Fuser pipeline, and it gave me a false success as well. That solution contains no Triton-based attention either, because the test allows the agent to fall back on torch.bmm() and Tensor.matmul().
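If the goal is to force a genuinely Triton-based kernel, the harness could also reject solutions that delegate the heavy lifting back to PyTorch. Here is a rough sketch of such a guard; the disallowed list is my assumption about what the benchmark intends to forbid, not KernelAgent's actual policy:

```python
import ast

# Calls that let a "Triton" solution delegate attention back to PyTorch.
# This list is an assumption, not taken from KernelBench or KernelAgent.
DISALLOWED = {"bmm", "matmul", "einsum", "scaled_dot_product_attention"}

def disallowed_calls(source: str) -> list[str]:
    """Return disallowed attribute calls (e.g. torch.bmm, x.matmul) found in source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in DISALLOWED:
                hits.append(node.func.attr)
    return hits

with open("final_kernel.py") as f:
    offenders = disallowed_calls(f.read())
if offenders:
    print("solution falls back on PyTorch matmul:", sorted(set(offenders)))
```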

Would you release all of the artifacts so we can better understand the situation? Thanks.
