Original Paper's Code Released #168
The author of the original paper that the transformer MVP model was based on has released his code: https://github.com/gzerveas/mvts_transformer
It may be useful for comparing results against what the paper reports on the benchmark datasets.

Replies: 2 comments

Hi @xanderdunn,
I've just closed issue #170 with this comment: I've compared the TST implementation in tsai with the original code published by the authors (https://github.com/gzerveas/mvts_transformer). I've updated the models based on a few findings that were not described in the arXiv paper: the learnable positional encoding is initialized using nn.init.uniform_(W_pos, -0.02, 0.02). The code has already been updated on GitHub. Thanks @xanderdunn for informing us that the original code was already available! Based on this, I'll close this issue.
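For anyone curious what that change looks like in practice, here is a minimal PyTorch sketch of a learnable positional encoding using that initialization. The class and argument names are illustrative, not tsai's actual source; only the uniform(-0.02, 0.02) init comes from the comment above:

```python
import torch
import torch.nn as nn

class LearnablePositionalEncoding(nn.Module):
    """Illustrative sketch (not tsai's actual code): a learnable
    positional encoding initialized the way the original
    mvts_transformer code does it."""
    def __init__(self, seq_len: int, d_model: int):
        super().__init__()
        # One learnable vector per time step; W_pos matches the
        # parameter name mentioned in the comment above.
        self.W_pos = nn.Parameter(torch.empty(seq_len, d_model))
        # Initialization found in the original code but not described
        # in the arXiv paper:
        nn.init.uniform_(self.W_pos, -0.02, 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); W_pos broadcasts over the batch.
        return x + self.W_pos
```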