
src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp:36 — enabling this macro definition produces a build error #763

Open
pengl opened this issue Oct 11, 2023 · 0 comments
Labels
bug Something isn't working

Comments


pengl commented Oct 11, 2023

Branch/Tag/Commit

main

Docker Image Version

nvcr.io/nvidia/pytorch:22.08-py3

GPU name

A10

CUDA Driver

515.65.01

Reproduced Steps

https://github.com/NVIDIA/FasterTransformer/blob/f0b5b8631806aedfbe0d844eb9a32202002dd463/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp#L38

Enabling the macro "MMHA_USE_FP32_ACUM_FOR_LOGITS" (i.e., defining it so the guarded code path is compiled) produces compile errors.
How should this macro be enabled, and what else needs to be changed?
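For context, the usual way to "open" (enable) a compile-time macro like this without editing the header is to pass a `-D` preprocessor definition to the host and CUDA compilers at configure time. The sketch below is a hypothetical example of how that could be attempted for this repository's CMake build; whether defining the macro alone compiles cleanly is exactly the open question in this issue.

```shell
# Hypothetical sketch: define MMHA_USE_FP32_ACUM_FOR_LOGITS globally when
# configuring the CMake build, so the #ifdef-guarded FP32-accumulation code
# path in decoder_masked_multihead_attention_template.hpp is compiled in.
# (Flag names below are standard CMake variables, not project-specific ones.)
cmake -DCMAKE_CXX_FLAGS="-DMMHA_USE_FP32_ACUM_FOR_LOGITS" \
      -DCMAKE_CUDA_FLAGS="-DMMHA_USE_FP32_ACUM_FOR_LOGITS" \
      ..
make -j
```

Alternatively, the macro can be enabled by uncommenting or adding `#define MMHA_USE_FP32_ACUM_FOR_LOGITS` near the top of the template header itself, which is presumably what the reporter tried before hitting the compile errors.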

pengl added the bug label on Oct 11, 2023