
Conversation


@glmanhtu commented Dec 7, 2020

Refers to issue #39.

Since packed_RNN_out is padded with zeros so that all sequences have the same length, we can't simply take the last time step of RNN_out and decode it as the output of the network. Instead, we use the second return value of torch.nn.utils.rnn.pad_packed_sequence (the actual sequence lengths) to determine which element is the last valid one of each sequence and decode that.
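A minimal sketch of the idea, assuming an LSTM encoder with batch_first=True; the module and variable names here are illustrative, not the ones used in this repository:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Illustrative encoder; the actual model in the PR may differ.
rnn = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)

def last_valid_outputs(x, lengths):
    """Return the RNN output at the last non-padded time step of each
    sequence, using the lengths reported by pad_packed_sequence."""
    packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True,
                                  enforce_sorted=False)
    packed_out, _ = rnn(packed)
    # out: (batch, max_len, hidden); out_lengths: true length of each sequence
    out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
    # Index the last valid time step per sequence instead of out[:, -1, :],
    # which would return padded zeros for any sequence shorter than max_len.
    batch_idx = torch.arange(out.size(0), device=out.device)
    last_idx = (out_lengths - 1).to(out.device)
    return out[batch_idx, last_idx]
```

With the naive out[:, -1, :] indexing, every sequence shorter than the longest one in the batch would be decoded from a zero vector; gathering by the reported lengths avoids that.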
…) when GPU is not available. This will not affect the training flow on GPU machines.
