feat: adaptive keyword boost with Convex Combination fusion #17
Following up on #13, I ran more experiments to find an alternative to RRF+MMR.
Background
RRF fusion on LoCoMo scored 7.9% worse than the baseline. That makes sense: LoCoMo queries are conversational ("When did X happen?", "What did Y say?"), so there are no exact terms for keyword search to match.
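For context, RRF ignores raw scores entirely and fuses purely on ranks. A minimal sketch (k=60 is the common default smoothing constant, not necessarily what this repo uses):

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank_d).

    Only a document's rank in each list matters; raw score
    magnitudes are discarded.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Documents ranked high in both lists win, regardless of raw scores.
fused = rrf_fuse([["a", "b", "c"], ["b", "a", "d"]])
```

Because magnitudes are discarded, a marginal keyword hit can outvote a strong semantic hit, which is one plausible reason it hurt on conversational queries.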
Keyword search helps with a different class of queries: function names (`parseJWT`), error codes (CVE-2017-3156), version strings (Oracle 12c). So I tried an adaptive approach: let the LLM decide when to use it.
What I tested
Fusion method: Convex Combination vs RRF
CC preserves score magnitudes, while RRF uses only ranks. CC worked better for this workload.
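A minimal sketch of CC fusion, assuming min-max normalization of each list before mixing (the normalization choice is mine, not necessarily what the PR implements):

```python
def convex_combination(semantic, keyword, alpha=0.7):
    """Fuse two {doc_id: score} dicts as alpha * semantic + (1 - alpha) * keyword.

    Unlike RRF, the relative magnitude of each score survives fusion,
    so a strong semantic match stays strong after mixing.
    """
    def minmax(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero on uniform scores
        return {d: (s - lo) / span for d, s in scores.items()}

    sem, kw = minmax(semantic), minmax(keyword)
    docs = set(sem) | set(kw)
    return {d: alpha * sem.get(d, 0.0) + (1 - alpha) * kw.get(d, 0.0)
            for d in docs}
```

Normalization matters here: semantic similarities and BM25-style keyword scores live on different scales, so without it alpha stops being interpretable as a true mixing weight.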
Alpha values (LoCoMo, 60 questions):
Adaptive alpha (varying α by keyword_importance): performed worse than fixed α = 0.7. Even for technical queries, aggressive keyword weighting hurts; the planning step already captures the key terms semantically.
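The adaptive-alpha variant reduces to a small mapping from the planner's keyword_importance signal to a mixing weight. The linear mapping and the 0.4 floor below are illustrative assumptions, not the exact values tested:

```python
def adaptive_alpha(keyword_importance, base=0.7, floor=0.4):
    """Lower alpha (i.e. weight keywords more) as keyword_importance rises.

    keyword_importance in [0, 1] comes from the LLM planner. In the
    experiments above, this variant lost to simply returning 0.7.
    """
    return max(floor, base - 0.3 * keyword_importance)
```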
Results
The system correctly skips the boost for conversational queries and applies it for technical ones.
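The skip-or-boost decision itself is a single branch on the planner output. A hedged sketch, assuming the plan carries a `keywords` list and the fixed α = 0.7 from above (function names are mine):

```python
def retrieve(query, plan, semantic_search, keyword_search, alpha=0.7):
    """Apply the keyword boost only when the LLM planner extracted exact terms.

    `plan` is assumed to be a dict whose "keywords" entry lists exact-match
    terms (function names, error codes, versions); an empty list means a
    conversational query. Both search callables return {doc_id: score}.
    """
    sem = semantic_search(query)
    if not plan.get("keywords"):
        return sem  # conversational query: semantic scores only, no boost
    kw = keyword_search(plan["keywords"])  # technical query: CC-fuse both legs
    return {d: alpha * sem.get(d, 0.0) + (1 - alpha) * kw.get(d, 0.0)
            for d in set(sem) | set(kw)}
```

This sketch skips score normalization for brevity; in practice both legs would be normalized before fusing, as in the CC discussion above.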
Limitations
References