In-Datacenter Performance Analysis of a Tensor Processing Unit (TPU) #3
Most of the other blocks in the architecture diagram exist to keep this matrix compute array (the systolic array) as fully occupied as possible, and the design points below follow from that. (Reference blog: "TPU使用了脉动阵列技术".)

1. The TPU uses the systolic-array technique.
2. When the systolic array computes, the multiply-accumulate along the column direction is serial: each PE multiplies two elements and passes the partial result downward to serve as the addend for the next PE. Along the row direction, however, the input queue keeps streaming data in to fill the PEs, so the multiply-accumulate results of many vectors are produced in a pipelined fashion (a small simulation sketch follows below). Further reading: "我们应该拥抱'脉动阵列'吗 - 对 Google TPU 可扩展性的思考" (Should we embrace the "systolic array"? Thoughts on the scalability of the Google TPU) and "谷歌TPU" (Google TPU).
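To make the pipelining in point 2 concrete, here is a minimal cycle-level sketch of a weight-stationary systolic dataflow. It is my own illustration, not code from the paper: the function name `systolic_matmul` and the array shapes are assumptions. Weights `B[i][j]` stay fixed in PE `(i, j)`, activations from `A` enter the left edge with a one-cycle skew per row, and partial sums flow down each column, so once the pipeline is full one finished output row drains from the bottom edge per cycle.

```python
# Hypothetical illustration of a weight-stationary systolic array; not the TPU's actual RTL.
import numpy as np


def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"

    act = np.zeros((K, N))    # activation sitting at PE (i, j) this cycle
    psum = np.zeros((K, N))   # partial sum arriving at PE (i, j) this cycle
    out = np.zeros((M, N))

    for t in range(M + K + N):
        # Feed the left edge: A[m, i] enters row i at cycle m + i (skewed input).
        for i in range(K):
            m_in = t - i
            act[i, 0] = A[m_in, i] if 0 <= m_in < M else 0.0

        # Capture what the bottom row of PEs pushes out this cycle.
        for j in range(N):
            m = t - (K - 1) - j  # which output row finishes in column j now
            if 0 <= m < M:
                out[m, j] = psum[K - 1, j] + act[K - 1, j] * B[K - 1, j]

        # Advance one cycle: partial sums flow down, activations flow right.
        new_act = np.zeros_like(act)
        new_psum = np.zeros_like(psum)
        for i in range(K):
            for j in range(N):
                if i > 0:
                    new_psum[i, j] = psum[i - 1, j] + act[i - 1, j] * B[i - 1, j]
                if j > 0:
                    new_act[i, j] = act[i, j - 1]
        act, psum = new_act, new_psum

    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))
    B = rng.standard_normal((4, 3))
    assert np.allclose(systolic_matmul(A, B), A @ B)
    print("systolic result matches A @ B")
```

Note how the column-direction accumulation is serial (each partial sum takes one hop per PE), while the row-direction streaming means a new output row completes every cycle after the initial fill latency, which is the pipelining behavior described above.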