`lstm_output` : `[batch_size, n_step, n_hidden * num_directions(=2)]`, the F matrix.

The line `hidden = final_state.view(batch_size, -1, 1)` in the source code is wrong. For a bi-LSTM, `final_state` has shape `[num_directions(=2), batch_size, n_hidden]`, so the forward and backward hidden states have to be concatenated explicitly. Calling `final_state.view(batch_size, -1, 1)` only reinterprets the memory layout; the resulting hidden state is not the concatenation of `final_state[0][i]` and `final_state[1][i]` for each sample `i`. The corrected version:

```python
def attention_net(self, lstm_output, final_state):
    # lstm_output : [batch_size, n_step, n_hidden * num_directions(=2)], F matrix
    # final_state : [num_directions(=2), batch_size, n_hidden]
    batch_size = len(lstm_output)
    hidden_forward = final_state[0]   # [batch_size, n_hidden]
    hidden_backward = final_state[1]  # [batch_size, n_hidden]
    # concatenate the forward and backward hidden states per sample
    hidden_f_b = torch.cat((hidden_forward, hidden_backward), 1)  # [batch_size, n_hidden * 2]
    hidden = hidden_f_b.view(batch_size, -1, 1)  # [batch_size, n_hidden * 2, 1]
    # wrong (original): hidden = final_state.view(batch_size, -1, 1)
```
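To see the problem concretely, here is a minimal, self-contained sketch (the shapes `num_directions=2`, `batch_size=3`, `n_hidden=4` are picked arbitrarily for illustration) comparing the `view` reinterpretation with the explicit concatenation:

```python
import torch

# final_state : [num_directions(=2), batch_size(=3), n_hidden(=4)], values 0..23
final_state = torch.arange(2 * 3 * 4, dtype=torch.float).view(2, 3, 4)

# what the original code computes: a pure memory-layout reinterpretation
wrong = final_state.view(3, -1, 1).squeeze(2)
# what we actually want: per-sample concat of forward and backward states
right = torch.cat((final_state[0], final_state[1]), 1)

print(torch.equal(wrong, right))  # False: view() mixes rows across samples
print(torch.equal(right[0], torch.cat((final_state[0][0], final_state[1][0]))))  # True
```

With `view` alone, row 0 ends up holding `final_state[0][0]` and `final_state[0][1]` (the forward states of two different samples), not the forward and backward state of sample 0.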