Description
Describe the feature and the current behavior/state.
Everything is in the title. It's hard to monitor this manually. Having a test in the CI would help us make sure we don't start using private/experimental APIs. The more we use them, the harder it's going to be when we bump tensorflow versions. It's also a requirement if we want to be able to support multiple versions of tensorflow at the same time (I know, crazy talk, right?).
In the long run, that's one more check that we won't have to do manually as maintainers.
We should also check for `tf.compat.v1`. If there is a `tf.compat.v1` usage that we can't remove, it means we should ask the TF team to add an equivalent public API.
Relevant information
- Are you willing to contribute it (yes/no): yes
- Are you willing to maintain it going forward? (yes/no): yes, we might even need it for keras
- Is there a relevant academic paper? (if so, where): no
- Is there already an implementation in another framework? (if so, where): not to my knowledge
- Was it part of tf.contrib? (if so, where): no
Which API type would this fall under (layer, metric, optimizer, etc.)
Tests
Who will benefit from this feature?
Us devs, and users because we'll be able to support more TF versions.
Any other info.
I have no idea how to implement this cleanly (analysing the AST, maybe?). Ideas welcome.
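To make the AST idea concrete, here is a minimal sketch of what the check could look like. It parses a source file with the standard `ast` module and flags any attribute chain rooted at `tf` that either touches an underscore-prefixed (private) member or starts with `tf.compat.v1`. The function name and the two heuristics are my own assumptions, not an existing tool; a real CI check would also need to handle aliased imports and `from tensorflow import ...` forms.

```python
import ast


def find_forbidden_tf_usages(source):
    """Return (lineno, dotted_name) pairs for attribute chains rooted at
    `tf` that use a private member (leading underscore) or tf.compat.v1.

    Hypothetical helper for a CI lint; only covers the `tf.x.y` form.
    """
    tree = ast.parse(source)
    attrs = [n for n in ast.walk(tree) if isinstance(n, ast.Attribute)]
    # Skip Attribute nodes that are part of a longer chain, so that
    # tf.compat.v1.placeholder is reported once, not three times.
    inner = {id(a.value) for a in attrs}

    findings = []
    for node in attrs:
        if id(node) in inner:
            continue
        # Rebuild the dotted name, e.g. "tf.compat.v1.placeholder".
        parts = []
        cur = node
        while isinstance(cur, ast.Attribute):
            parts.append(cur.attr)
            cur = cur.value
        if not (isinstance(cur, ast.Name) and cur.id == "tf"):
            continue  # not rooted at the `tf` module alias
        dotted = "tf." + ".".join(reversed(parts))
        if any(p.startswith("_") for p in parts) or dotted.startswith("tf.compat.v1"):
            findings.append((node.lineno, dotted))
    return findings
```

A CI job could run this over every file in the package and fail if the list is non-empty; note that it never imports tensorflow, so the check stays cheap and version-independent.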