It's not clear to me how the Concat strategy from Table 2 works in practice. Based on the text in the paper, I thought that shrinking each amino-acid representation down to 4 dimensions and concatenating them happens after the main model has been trained. But the only reference to "concat" I can find in the code is in tape/tape/models/modeling_lstm.py (and likewise for the bert, autoencoder, and resnet model files). However, the temporal_pooling variable that controls the pooling behavior (max, mean, concat, etc.) does not appear to be used anywhere outside of those four scripts.
So, long story short, how do I reproduce the Concat behavior for a pre-trained model?
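For context, here is a minimal NumPy sketch of what I understand the three pooling modes to mean, based only on my reading of the paper. The function names, the `proj` down-projection matrix, and the fixed `max_len` padding are all my own assumptions, not code from the TAPE repo:

```python
import numpy as np

def mean_pool(hidden):
    # hidden: (seq_len, hidden_dim) per-residue representations
    return hidden.mean(axis=0)                 # (hidden_dim,)

def max_pool(hidden):
    return hidden.max(axis=0)                  # (hidden_dim,)

def concat_pool(hidden, proj, max_len):
    """Assumed Concat semantics: project each per-residue vector
    down to 4 dims, then concatenate the projected vectors into one
    fixed-size vector. `proj` stands in for a learned projection and
    `max_len` for the padded sequence length; both are hypothetical."""
    small = hidden @ proj                      # (seq_len, 4)
    out = np.zeros((max_len, 4))
    n = min(len(small), max_len)
    out[:n] = small[:n]                        # zero-pad / truncate
    return out.reshape(-1)                     # (max_len * 4,)

rng = np.random.default_rng(0)
hidden = rng.standard_normal((50, 768))        # e.g. 50 residues, 768-dim model
proj = rng.standard_normal((768, 4))           # hypothetical down-projection
pooled = concat_pool(hidden, proj, max_len=100)
print(pooled.shape)                            # (400,)
```

If this matches the intended behavior, my remaining question is where `proj` comes from for a pre-trained model, i.e. whether it is trained jointly or fitted afterward.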