Thanks for your excellent work!

I'd like to discuss the memory reduction. It seems the TVM implementation does not store fewer matrices (the Query, Key, and Value matrices are still the same size). The number of Q-K pairs is smaller than in full attention, so the faster computation makes sense, but why does the memory reduction follow a similar trend to the time reduction? The TVM kernel does not appear to use any special technique to save memory, and even the padding zeros are stored as int32, yet in practice the TVM implementation is memory efficient...
Looking forward to your reply.
In fact, the number of Q-K pairs not only determines the computational complexity but also drives the memory consumption. The memory occupied by the attention score matrix $S = QK^\top$ grows quadratically with the sequence length $L$: full attention must materialize $O(L^2)$ scores. Reducing the number of attention scores that need to be stored therefore directly reduces memory consumption. The TVM implementation stores at most $A + C + 1$ attention scores per query, i.e., $O((A+C+1)L)$ scores in total, which is linear in $L$.
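To make the contrast concrete, here is a minimal PyTorch sketch (not the actual TVM kernel; `L`, `d`, `A`, `C`, and the key indices are illustrative placeholders) comparing the memory occupied by the score matrix $S$ in the two cases:

```python
import torch

L, d = 4096, 64      # sequence length, head dimension (illustrative)
A, C = 5, 4          # neighbours and children per query (illustrative)

Q = torch.randn(L, d)
K = torch.randn(L, d)

# Full attention: S = Q @ K^T materializes L * L scores.
S_full = Q @ K.T                                  # shape (L, L)
full_bytes = S_full.numel() * S_full.element_size()

# Sparse attention: at most A + C + 1 scores are stored per query.
# We gather the relevant keys per query via a placeholder index tensor;
# a real kernel computes these scores in place without the gather.
idx = torch.randint(0, L, (L, A + C + 1))         # placeholder key indices
K_sel = K[idx]                                    # shape (L, A+C+1, d)
S_sparse = torch.einsum('ld,lkd->lk', Q, K_sel)   # shape (L, A+C+1)
sparse_bytes = S_sparse.numel() * S_sparse.element_size()

print(f"full:   {full_bytes / 2**20:.1f} MiB")    # ~64 MiB at L=4096, fp32
print(f"sparse: {sparse_bytes / 2**20:.3f} MiB")  # ~0.16 MiB
```

Under these assumptions the full score matrix costs roughly 64 MiB per head in fp32, while the sparse version needs only about 0.16 MiB, and the gap widens quadratically as $L$ grows.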