
Conversation

@mdekstrand
Member

This splits item scoring in LightGCN into batches to reduce the inference-time memory overhead of copied tensors.
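
For reference, a minimal sketch of the batched-scoring idea, assuming the user and item embeddings have already been propagated through the LightGCN layers; `user_emb`, `item_embs`, and `batch_size` are illustrative names, not LensKit's actual API:

```python
import torch


def score_items_batched(
    user_emb: torch.Tensor,   # (d,) embedding for one user
    item_embs: torch.Tensor,  # (n_items, d) embeddings for candidate items
    batch_size: int = 1024,
) -> torch.Tensor:
    """Score items in fixed-size batches instead of one large matmul."""
    scores = torch.empty(
        item_embs.shape[0], device=item_embs.device, dtype=item_embs.dtype
    )
    for start in range(0, item_embs.shape[0], batch_size):
        batch = item_embs[start : start + batch_size]
        # Dot product of the user embedding with each item in the batch;
        # only `batch_size` rows of intermediates are live at a time.
        scores[start : start + batch.shape[0]] = batch @ user_emb
    return scores
```

The trade-off: scoring one batch at a time replaces a single large matrix product with a Python-level loop, so per-batch overhead can dominate when the full product already fits in memory.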

@codecov

codecov bot commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 88.96%. Comparing base (61dc1b4) to head (a2d3084).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #982   +/-   ##
=======================================
  Coverage   88.96%   88.96%           
=======================================
  Files         218      218           
  Lines       15089    15095    +6     
=======================================
+ Hits        13424    13430    +6     
  Misses       1665     1665           


@mdekstrand force-pushed the feature/gcn-batch-infer branch from 1402085 to a2d3084 on January 21, 2026 at 18:40
@mdekstrand
Member Author

This does not seem to reduce memory usage, and it significantly increases recommendation time.
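
For context, one generic way to check the memory side of such a conclusion is PyTorch's peak-allocation counters; this is a standard measurement pattern, not the benchmark actually run for this PR, and the tensors below are placeholders:

```python
import torch

device = torch.device("cuda")
user_emb = torch.randn(64, device=device)           # placeholder embeddings
item_embs = torch.randn(50_000, 64, device=device)

torch.cuda.reset_peak_memory_stats(device)
scores = item_embs @ user_emb                       # scoring step under test
torch.cuda.synchronize(device)
peak_mib = torch.cuda.max_memory_allocated(device) / 2**20
print(f"peak allocated: {peak_mib:.1f} MiB")
```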

@mdekstrand mdekstrand closed this Jan 21, 2026
@mdekstrand mdekstrand deleted the feature/gcn-batch-infer branch January 21, 2026 20:15
