ML.createContext as currently specified takes an optional MLContextOptions:
The MLContext has an MLOpSupportLimits dictionary which corresponds to the capabilities of the device the user is running on:
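For reference, the relevant WebNN shapes look roughly like this (abbreviated sketch; member lists are trimmed and may not match the latest draft of the spec):

```webidl
// Abbreviated sketch of the current WebNN shapes; see the WebNN
// spec for the full definitions.
dictionary MLContextOptions {
  MLDeviceType deviceType = "cpu";
  MLPowerPreference powerPreference = "default";
};

[SecureContext, Exposed=(Window, DedicatedWorker)]
interface ML {
  Promise<MLContext> createContext(optional MLContextOptions options = {});
};

[SecureContext, Exposed=(Window, DedicatedWorker)]
interface MLContext {
  // Reports the limits of the device backing this particular context,
  // which therefore vary from device to device.
  MLOpSupportLimits opSupportLimits();
};
```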
This could lead to portability issues, since it allows a website author to test on a desktop computer whose capabilities differ from those of a mobile phone or tablet. Or perhaps they test on a Windows or Linux computer and don't realize their models are incompatible with macOS until they receive bug reports.
Compare this to the design of WebGPU, where the call to GPUAdapter.requestDevice takes an optional GPUDeviceDescriptor.
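For comparison, GPUDeviceDescriptor in the WebGPU spec looks roughly like this (abbreviated sketch):

```webidl
// Abbreviated sketch of WebGPU's device descriptor. Anything beyond
// the spec-guaranteed baseline must be requested explicitly up front.
dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
  // Optional features beyond the core set must be listed here.
  sequence<GPUFeatureName> requiredFeatures = [];
  // Limits beyond the spec-defined defaults must be requested by name;
  // omitting this yields the same default limits on every device.
  record<DOMString, GPUSize64> requiredLimits = {};
};
```

The key design point is that a site which passes no descriptor gets the guaranteed baseline everywhere, and a site which needs more must opt in explicitly, so the dependency is visible rather than accidental.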
If a GPUDeviceDescriptor is not passed, a default set of capabilities is available, with the goal of ensuring the website runs on any device where WebGPU is supported.
It would seem reasonable for WebNN to adopt a design similar to WebGPU's, where the ML interface reports which capabilities are available but falls back to a common set of supported limits and features, so that websites using the defaults run everywhere WebNN is supported.
There are various ways to achieve this, one being as simple as adding an optional MLOpSupportLimits parameter to ML.createContext; if one is not passed, a common set of defaults supported everywhere WebNN is available would be used.
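Concretely, the simplest variant of that proposal might look something like the following. This is hypothetical IDL, not part of any spec; the parameter name and the defaulting behavior are illustrative only:

```webidl
// Hypothetical sketch, not in the WebNN spec: createContext takes the
// limits the site requires. Omitting them yields a context with a
// baseline set of limits guaranteed on every WebNN implementation,
// mirroring WebGPU's requiredLimits behavior.
partial interface ML {
  Promise<MLContext> createContext(
      optional MLContextOptions options = {},
      optional MLOpSupportLimits requiredLimits = {});
};
```

Under this sketch, creation would reject if the underlying device cannot satisfy the requested limits, so an unsupported configuration fails loudly at startup on the developer's machine rather than silently behaving differently on users' devices.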