Description
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template
System information
- TensorFlow.js version (you are using): 2.4.0
- Are you willing to contribute it (Yes/No):
Describe the feature and the current behavior/state.
For ML models that involve media processing (e.g. the BodyPix model), there are a couple of places where expensive data copying can happen:
- Creating a tensor from the result of a video processing pipeline. TFJS has the fromPixels API, but it requires an HTML element (video/image); for data that is processed on the GPU this needs a work-around (download to the CPU or create an HTML element).
- Tensors from the ML execution cannot be further processed on the GPU after inference, except through TFJS op APIs. If you have post-processing steps after the model execution that need to run on the GPU, you end up downloading the tensors and re-uploading them again (see the sketch below).
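To make the cost concrete, here is a minimal sketch of the round trip currently required to hand a model output (e.g. a single-channel BodyPix-style segmentation mask) to a custom WebGL post-processing pass. The destination `gl` context, the `tensorToTexture` helper name, and the mask shape are assumptions for illustration:

```ts
import * as tf from '@tensorflow/tfjs';

// Sketch of the current work-around: read the tensor back to the CPU, then
// re-upload it as a WebGL texture in the rendering context used for post-processing.
async function tensorToTexture(
    gl: WebGL2RenderingContext, mask: tf.Tensor3D): Promise<WebGLTexture> {
  const [height, width] = mask.shape;                // e.g. a [H, W, 1] segmentation mask
  const data = (await mask.data()) as Float32Array;  // GPU -> CPU download (the copy we want to avoid)

  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  // CPU -> GPU re-upload of the same data.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, width, height, 0, gl.RED, gl.FLOAT, data);
  return texture;
}
```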
Proposing two new tensor APIs to solve the above problems. These APIs target tensors that are located on the WebGL backend (a usage sketch follows the list):
- Create a tensor from a WebGL texture handle; this allows data from GPU pre-processing steps to be converted to a tensor.
- Retrieve the texture handle for a tensor; this avoids downloading the tensor from the GPU and enables continued post-processing on the GPU.
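A rough sketch of how the two proposed APIs could be used end to end. The names `createTensorFromTexture` and `getTextureForTensor`, their signatures, and the frame shape are purely illustrative assumptions, not part of any existing TFJS API:

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical signatures for the two proposed WebGL-backend APIs (illustrative only).
declare function createTensorFromTexture(
    texture: WebGLTexture, shape: [number, number, number]): tf.Tensor3D;
declare function getTextureForTensor(tensor: tf.Tensor): WebGLTexture;

function run(preprocessedFrame: WebGLTexture, model: tf.GraphModel): WebGLTexture {
  // No CPU copy on input: wrap the GPU pre-processed frame directly as a WebGL-backend tensor.
  const input = createTensorFromTexture(preprocessedFrame, [480, 640, 3]);
  const output = model.predict(input) as tf.Tensor;

  // No CPU copy on output: hand the backing texture to application shaders for post-processing.
  return getTextureForTensor(output);
}
```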
Will this change the current api? How?
This would not change any of the current APIs, and the proposed APIs would throw an exception for tensors that are not on the WebGL backend.
Who will benefit with this feature?
Media processing pipelines that use TFJS for model execution with WebGL acceleration.
Any Other info.