
float16/complex32? #8435

Open
charliez0 opened this issue Nov 16, 2024 · 4 comments
@charliez0

Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template

System information

  • TensorFlow.js version (you are using): 4.22.0
  • Are you willing to contribute it (Yes/No): Yes

Describe the feature and the current behavior/state.

Will this change the current api? How?

Who will benefit with this feature?

Any Other info.

@charliez0 charliez0 added the type:feature New feature or request label Nov 16, 2024
@shmishra99 shmishra99 self-assigned this Nov 17, 2024
@shmishra99
Contributor

Hi @charliez0 ,

Could you please elaborate on the feature and how it will benefit the community?

Thank You!!

@charliez0
Author

charliez0 commented Nov 17, 2024

I'm using tfjs-node-gpu, whose backend supports fp16, fp8, etc., but there seems to be no way to initialize tensors with those dtypes.

Also, is there any support for fp64 and complex128?

@shmishra99
Contributor

Hi @charliez0 ,

We can define the dtype in @tensorflow/tfjs-node-gpu while initializing tensors. Currently, it supports 'float32', 'int32', 'bool', 'complex64', and 'string' data types.

const tf = require('@tensorflow/tfjs-node-gpu');
const floatTensor = tf.tensor2d([[1.0, 2.0], [3.0, 4.0]], [2, 2], 'float32');
console.log(floatTensor);

Let me know if I missed anything. Thank you!
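Since passing an unsupported dtype string only fails deep inside the library, a small guard can fail fast with a clear message. This is just a sketch in plain JS; `assertSupportedDtype` is a hypothetical helper, not part of the tfjs API, and the list below reflects the dtypes named above.

```javascript
// Dtypes currently accepted by tfjs tensor factories (per the comment above).
const SUPPORTED_DTYPES = ['float32', 'int32', 'bool', 'complex64', 'string'];

// Hypothetical helper: throw early with a readable message instead of
// letting an unsupported dtype surface as a deep library error.
function assertSupportedDtype(dtype) {
  if (!SUPPORTED_DTYPES.includes(dtype)) {
    throw new Error(
      `dtype '${dtype}' is not supported; use one of: ` +
      SUPPORTED_DTYPES.join(', '));
  }
  return dtype;
}

// Usage before constructing a tensor:
// tf.tensor2d(data, [2, 2], assertSupportedDtype('float32'));  // ok
// assertSupportedDtype('float16');                             // throws
```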

@charliez0
Author

Missing: float64, complex128, float16, bfloat16, int16, int8, etc.
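Until smaller dtypes land in tfjs, half-precision data can at least be packed and unpacked by hand, since float16 is just a bit layout (1 sign, 5 exponent, 10 mantissa bits). The sketch below, in plain JS with no tfjs dependency, converts between float32 and float16 bit patterns (truncating the mantissa rather than rounding); decoded values can then be fed to the existing float32 tensor factories.

```javascript
// Pack a JS number into IEEE 754 half-precision bits (mantissa truncated).
function float32ToFloat16Bits(val) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = val;
  const x = u32[0];
  const sign = (x >>> 16) & 0x8000;
  let e = (x >>> 23) & 0xff;
  let m = x & 0x007fffff;
  if (e === 0xff) return sign | 0x7c00 | (m ? 0x0200 : 0); // Inf / NaN
  e = e - 127 + 15;                       // re-bias exponent 127 -> 15
  if (e >= 31) return sign | 0x7c00;      // overflow -> Inf
  if (e <= 0) {                           // subnormal range
    if (e < -10) return sign;             // underflow -> signed zero
    m = (m | 0x800000) >> (1 - e);        // add hidden bit, shift down
    return sign | (m >> 13);
  }
  return sign | (e << 10) | (m >> 13);    // normal number
}

// Unpack half-precision bits back into a JS number.
function float16BitsToFloat32(h) {
  const sign = (h & 0x8000) << 16;
  let e = (h >>> 10) & 0x1f;
  let m = h & 0x3ff;
  let bits;
  if (e === 0) {
    if (m === 0) {
      bits = sign;                        // signed zero
    } else {                              // subnormal: renormalize
      while ((m & 0x400) === 0) { m <<= 1; e--; }
      e++; m &= 0x3ff;
      bits = sign | ((e + 127 - 15) << 23) | (m << 13);
    }
  } else if (e === 31) {
    bits = sign | 0x7f800000 | (m << 13); // Inf / NaN
  } else {
    bits = sign | ((e + 127 - 15) << 23) | (m << 13);
  }
  const u32 = new Uint32Array([bits >>> 0]);
  return new Float32Array(u32.buffer)[0];
}
```

A `Uint16Array` of packed bits halves the memory of a float32 buffer at rest; values are widened back to float32 only when a tensor is actually built.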
