
Conversation

@krithalith (Contributor) commented Dec 19, 2025

Replace grouped convolution bwd weight wmma v3 bilinear and scale bf16f32bf16 support with bf16bf16bf16 support. Update tests.

While updating this, I found that it initially wasn't working for bilinear; it turned out that one of the overloads in the elementwise bilinear op struct, the (bf16, float accumulator, bf16) one, was incorrect. I fixed it and then everything ran fine. What surprises me, though, is that this didn't affect other convolution types which already had bilinear bf16bf16bf16 support. Also, there seems to be no overload for (bf16, float, float), which presumably would have been needed for the former bf16f32bf16 support. A standalone sketch of the overload shape in question is included below.
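For reference, here is a minimal standalone sketch of the kind of overload being discussed. This is not the CK source touched by this PR: `bf16_t`, `float_to_bf16`, and `bf16_to_float` are stand-ins for `ck::bhalf_t` / `ck::type_convert`, and the functor shape is illustrative only. The point it shows is that the (bf16 output, float accumulator, bf16 residual) overload should widen the bf16 input to float before blending, and narrow only once, on the output.

```cpp
// Standalone sketch (not the actual CK code) of a bilinear elementwise functor
// with a (bf16 output, float accumulator, bf16 residual) overload.
#include <cstdint>
#include <cstring>
#include <cstdio>

// Toy bf16 stand-in: top 16 bits of an IEEE-754 float (truncating conversion).
struct bf16_t { std::uint16_t bits; };

inline bf16_t float_to_bf16(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    return bf16_t{static_cast<std::uint16_t>(u >> 16)};
}

inline float bf16_to_float(bf16_t h)
{
    const std::uint32_t u = static_cast<std::uint32_t>(h.bits) << 16;
    float f;
    std::memcpy(&f, &u, sizeof(f));
    return f;
}

// Bilinear op: y = alpha * x0 + beta * x1.
struct Bilinear
{
    float alpha;
    float beta;

    // Overload for (output bf16, accumulator float, residual bf16):
    // widen the bf16 residual to float, blend in float, convert once on output.
    void operator()(bf16_t& y, const float& x0, const bf16_t& x1) const
    {
        const float y_acc = alpha * x0 + beta * bf16_to_float(x1);
        y = float_to_bf16(y_acc);
    }
};

int main()
{
    const Bilinear op{1.0f, 1.0f};
    bf16_t y{};
    op(y, 2.0f, float_to_bf16(3.0f));
    std::printf("y = %f\n", bf16_to_float(y)); // expect 5.0
    return 0;
}
```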

Proposed changes

Please describe the motivation behind the pull request, whether it enables a new feature or fixes a bug. If there are associated pull requests or issues, please link them to the pull request.

Checklist

Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

  • I have added tests relevant to the introduced functionality, and the unit tests are passing locally
  • I have added the test to the REGRESSION_TESTS list defined at the top of tests/CMakeLists.txt, IF the test takes more than 30 seconds to run.
  • I have added inline documentation which helps the maintainers understand the motivation
  • I have removed the stale documentation which is no longer relevant after this pull request
  • (If this change is user-facing) I have added release notes which provide the end users with a brief summary of the improvement from this pull request
  • I have run clang-format on all changed files
  • Any dependent changes have been merged

Discussion

If this is a relatively large or complex change, feel free to start a discussion by explaining why you chose the solution you did and what alternatives you considered.

…6f32bf16 support with bf16bf16bf16 support. Update tests.
…linear elementwise overload for this case (bf16, f32 accu, bf16) was wrong.
@krithalith force-pushed the streamhpc/grouped_conv_bwd_wei_bf16bf16bf16 branch from 83fdc09 to eb918ef on December 23, 2025 at 10:28.