Add license #1
Merged
Conversation
Sounds like this is the correct one :-)

Yes, all of TensorFlow is Apache 2 licensed. Another to-do would be adding a CONTRIBUTING file; perhaps in a little while, once you've figured out a good workflow.

Oh, and we probably ought to treat a README as a fairly high priority once we have code inside, to clarify expectations.
ewilderj approved these changes on Nov 29, 2018
Squadrick pushed a commit to Squadrick/addons that referenced this pull request on Mar 26, 2019
WindQAQ pushed a commit that referenced this pull request on Sep 14, 2020
* initial setup. need to build tests
* build some tests. need to test them
* fixed typo
* created first test
* created first test
* accidentally messed up another file
* accidentally messed up another file
* accidentally messed up another file
* added run all distributed
* fixed formatting
* trying to fix tests not running on github CI
* realized that I should probably add the new optimizer files to the build and init
* added typeguard and docstring
* removed run_all_distributed
* graph and eager testing for SGD
* reformatted
* added distributed tests
* removed distributed tests
* reverted discriminative layer grad adjust back to apply gradients
* added distributed tests with one-time virtual device init
* increased tolerance for distributed; added comments explaining tests
* changed how distributed is recognized for increasing tolerance
* Redesigned Logic into Optimizer Wrapper (#1)
* redesigned methodology to use multiple optimizers (one per unique LR) and pass grads to these multiple optimizers; should allow complex optimizers to behave properly
* adjusted behavior of resource apply to only return the op if the lr_mult matches the lr_mult of the optimizer; should only return one op for each var
* updated init file; changed training config
* removed variable position and added some more comments
* removed grouped variables as unnecessary
* reformatted
* updated documentation; explicitly defined serialization as not supported
* added typecheck for name
* added typecheck for name
* fixed blank line at end of init file
* realized no new line; meant to add new line; guessing that build file needs to be in alpha order?
* ran buildifier
* fixed accidentally affecting moving average
* changed print to logging.info
* changed print to logging.info
* Revert "changed print to logging.info" (this reverts commit 3fa5e19)
* added tutorial; tutorial doesn't import from tfa. May need to remove from PR, please let me know
* refactored to use static method; refactored to use getattr; updated warning on not using lr_mult; expanded on some docstrings
* updated the usage of lr_mult in variables
* renamed discriminative wrapper to disclayeropt
* added note to dissuade directly calling apply_gradients
* updated toy_cnn to use tempdir and no longer call context.eager; implemented toy_rnn function with same flow as toy_cnn
* added toy_rnn and sgd to the test permutations
* refactored permutes and train results into private fns
* reformatted files and fixed flake8 issues; fixed bad references when lr_mult was changed
* added missing functions in prep for tests
* updated assign lr mult and explained further why; refactored get lowest layers to assign sublayers; explained recursively assign sublayers better
* forgot to run black, so ran it to reformat
* specified input shape for rnn
* increased size of test; temporarily removed SGD opt. Doubling opts doubles the number of tests to run, so just need to see how long this one takes
* remove toy rnn for now
* changed back to medium; maybe large was not actually increasing runtime
* fixed input layer
* fixed input layer being in wrong place
* virtual device modification issue
* fixed incorrect usage of lr_mult
* added comments for tests explaining them better; added toy rnn for testing
* added new test; fixed toy rnn initialization
* fixed typo
* added input shape so that pretrained rnn generates weights
* changed test to allow head to learn; it should move the loss better
* reformatted
* fixed test for variable assignment; added get_config and from_config
* reformatted
* fixed layer references from 1 to 0 because the input layer isn't counted as an actual layer in the layer list
* reformatted
* increased lr and epochs because learning was happening but the assertLess tolerance was too low
* attempting to use run distributed from test utils
* removed tutorial
* switched to alternative distributed training method
* trying to use run distributed without graph and eager
* trying to use run_distributed
* seems that doing any tensor stuff before tf.test.main creates the issue; changed models to auto-check if weights exist and create or load
* forgot to return a model on first run of model fn
* create model weights on init
* changed how args are passed for testcase
* changed how args are passed for testcase
* try fix init
* trying to init weights on model properly
* trying to init weights on model properly
* just trying all the possibilities
* trying to fix weights setup
* expanded some comments for some tests
* fixed some docstrings and expanded on some comments
* reformatted files; expanded on many comments and added full stops; fixed get/from_config based on OptimizerV2; added model checkpoint test
* capitalized comments properly
* removed sgd; reduced size of training inputs
* simplified checkpoint name
* reformatted
* remove run tests in notebook
* updated README.md; fixed indent for __init__; added test for from_config and to_config
* fixed formatting
* removed distributed tests and added a warning if optimizer is initialized within a strategy scope
* renamed test_wrap to wrap_test because pytest thought it was a test
* converting tests into the pytest framework
* converted tests and parameterized
* cleaned up code
* added additional checks and docstring for changes in lr multiplier during training
* changed comment
* Simplified discriminative layer training by using a multi-optimizer wrapper class; removed old tests and added new tests conforming to pytest standard
* Refactored code using black and flake8
* updated init file
* fixed typeguard error and usage of private/experimental api
* restructured wrapper serialization and removed unnecessary components
* expanded on docstr and added repr
* cleaned up docstrings, added assertion tests, and added explicit test for only the serialization
* ran black and flake8
* fixed doc string

Co-authored-by: gabrieldemarmiesse <gabrieldemarmiesse@gmail.com>
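The core idea described in the commit message, one inner optimizer per unique learning-rate multiplier, with each gradient routed to the optimizer matching its variable's `lr_mult`, can be sketched without TensorFlow. This is a minimal, framework-free illustration, not the tfa.optimizers implementation; the `SGD` and `DiscriminativeWrapper` classes and dict-based "variables" here are stand-ins:

```python
class SGD:
    """Toy stand-in for a real optimizer: plain gradient descent."""

    def __init__(self, lr):
        self.lr = lr

    def apply_gradients(self, grads_and_vars):
        for grad, var in grads_and_vars:
            var["value"] -= self.lr * grad


class DiscriminativeWrapper:
    """Groups (grad, var) pairs by the variable's lr_mult and applies each
    group with an inner optimizer scaled to base_lr * lr_mult."""

    def __init__(self, base_lr, lr_mults):
        # One inner optimizer per unique multiplier, as the commits describe.
        self.opts = {m: SGD(base_lr * m) for m in set(lr_mults)}

    def apply_gradients(self, grads_and_vars):
        # Route each gradient to the optimizer matching its variable's lr_mult.
        buckets = {}
        for grad, var in grads_and_vars:
            buckets.setdefault(var["lr_mult"], []).append((grad, var))
        for mult, pairs in buckets.items():
            self.opts[mult].apply_gradients(pairs)


# Pretrained "backbone" learns slowly; the fresh "head" uses the full rate.
backbone = {"value": 1.0, "lr_mult": 0.1}
head = {"value": 1.0, "lr_mult": 1.0}

opt = DiscriminativeWrapper(base_lr=0.5, lr_mults=[0.1, 1.0])
opt.apply_gradients([(1.0, backbone), (1.0, head)])
print(backbone["value"], head["value"])
```

After one step the head moves ten times as far as the backbone, which is the intended effect of discriminative layer training.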
jrruijli
pushed a commit
to jrruijli/addons
that referenced
this pull request
Dec 23, 2020
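Several of the commits concern `get_config`/`from_config` serialization of the wrapper. The round-trip pattern they test can be sketched with a hypothetical class (`ToyOptimizer` is illustrative only, not the tfa wrapper or the Keras base class):

```python
class ToyOptimizer:
    """Hypothetical optimizer illustrating the Keras-style config round trip."""

    def __init__(self, learning_rate=0.01, name="ToyOptimizer"):
        self.learning_rate = learning_rate
        self.name = name

    def get_config(self):
        # Everything needed to rebuild the object, as plain Python values.
        return {"learning_rate": self.learning_rate, "name": self.name}

    @classmethod
    def from_config(cls, config):
        # Rebuild from the config dict produced by get_config.
        return cls(**config)


original = ToyOptimizer(learning_rate=0.1)
restored = ToyOptimizer.from_config(original.get_config())
assert restored.get_config() == original.get_config()
```

A serialization test like the one added in the commits typically asserts exactly this: that a restored object's config matches the original's.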
@karmel @ewilderj Can either of you confirm this is the correct license (copied from tensorflow/tensorflow)?
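One quick way to sanity-check such a file is to look for the standard Apache 2.0 header text. This is a rough, hedged heuristic for a first pass, not a substitute for comparing the full license text against the upstream copy:

```python
def looks_like_apache2(text):
    """Rough heuristic: does the text carry the Apache 2.0 header lines?"""
    normalized = " ".join(text.split())  # collapse the centered whitespace
    return "Apache License" in normalized and "Version 2.0" in normalized


# First lines of the standard Apache 2.0 license text, as in
# tensorflow/tensorflow's LICENSE file.
sample = """
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/
"""
print(looks_like_apache2(sample))
```

In practice you would read the repository's LICENSE file and diff it against the canonical text from apache.org rather than rely on a header match alone.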