
#2066 doctest update layers #2109

Closed

Conversation


@nataliyah123 nataliyah123 commented Aug 22, 2020

part of #2066
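For context, #2066 converts the usage examples in layer docstrings into runnable doctests. A minimal sketch of the pattern (the layer and values below are illustrative, not this PR's actual diff):

import tensorflow as tf
import tensorflow_addons as tfa

def _illustrative_example():
    """Doctest-style usage example, as added to layer docstrings.

    >>> x = tf.ones((2, 3))
    >>> y = tfa.layers.GELU()(x)  # layer chosen only for illustration
    >>> y.shape
    TensorShape([2, 3])
    """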

@bot-of-gabrieldemarmiesse

@tanzhenyu @cgarciae @charlielito

You are owners of some files modified in this pull request.
Would you kindly review the changes whenever you have the time?
Thank you very much.

@nataliyah123 nataliyah123 reopened this Aug 23, 2020
@nataliyah123
Contributor Author

@googlebot I signed it!

@googlebot

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@nataliyah123 nataliyah123 changed the title #2066 doctest update layers [WIP]#2066 doctest update layers Aug 23, 2020
hyang0129 and others added 10 commits September 14, 2020 10:39
* initial setup. need to build tests

* build some tests. need to test them

* fixed typo

* created first test

* created first test

* accidentally messed up another file

* accidentally messed up another file

* accidentally messed up another file

* added run all distributed

* fixed formatting

* trying to fix tests not running on github CI.

* realized that I should probably add the new optimizer files to the build and init

* added typeguard and docstring

* removed run_all_distributed

* graph and eager testing for SGD

* reformatted

* added distributed tests

* removed distributed tests

* reverted discriminative layer grad adjust back to apply gradients

* added distributed tests with one time virtual device init

* increased tolerance for distributed
added comments explaining tests

* changed how distributed is recognized for increasing tolerance

* Redesigned Logic into Optimizer Wrapper (#1)

* redesigned methodology to use multiple optimizers (one per unique LR) and pass grads to these multiple optimizers. Should allow for complex optimizers to behave properly

* adjusted behavior of resource apply to only return the op if the lr_mult matches the lr_mult of the optimizer
should only return 1 op for each var.

* updated init file
changed training config

* removed variable position and added some more comments

* removed grouped variables as unnecessary

* reformatted

* updated documentation
explicitly defined serialization as not supported

* added typecheck for name

* added typecheck for name

* fixed blank line at end of init file

* realized there was no newline; meant to add a newline
guessing that the build file needs to be in alpha order?

* ran buildifier

* fixed accidentally affecting moving average

* changed print to logging.info

* changed print to logging.info

* Revert "changed print to logging.info"

This reverts commit 3fa5e19

* added tutorial.
tutorial doesn't import from tfa. May need to remove from PR.
Please let me know

* refactored to use static method
refactored to use getattr
updated warning on not using lr_mult
expanded on some docstrings

* updated the usage of lr_mult in variables

* renamed discriminative wrapper to disclayeropt

* added note to dissuade directly calling apply_gradients

* updated toy_cnn to use tempdir and no longer call context.eager
implemented toy_rnn function with same flow as toy_cnn

* added toy_rnn and sgd to the test permutations

* refactored permutes and train results into private fns

* reformatted files and fixed flake 8 issues
fixed bad references when lr_mult was changed

* added missing functions in prep for tests

* updated assign lr mult and explained further why
refactored get lowest layers to assign sublayers
explained recursively assign sublayers better

* forgot to run black so ran it to reformat

* specified inputshape for rnn

* increased size of test
temporarily removed SGD opt. Double opts doubles the number of tests
to run so just need to see how long this one takes.

* remove toy rnn for now

* changed back to medium. maybe large was not actually increasing runtime

* fixed input layer

* fixed input layer being in wrong place

* virtual device modification issue

* fixed incorrect usage of lr_mult

* added comments for tests explaining them better
added toy rnn for testing

* added new test
fix toy rnn initialization

* fixed typo

* added inputshape so that pretrained rnn generates weights

* changed test to allow head to learn. it should move the loss better

* reformatted

* fixed test for variable assignment
added get config and from config

* reformatted

* fixed layer references from 1 to 0 because input layer isn't counted
as an actual layer in the layer list

* reformatted

* increased lr and epochs because learning was happening, but the assertLess
tolerance was too low

* attempting to use run distributed from test utils

* removed tutorial

* switched to alternative distributed training method

* trying to use run distributed without graph and eager

* trying to use run_distributed

* seems that doing any tensor stuff before tf.test.main creates the issue. changed models to auto check if weights exist and create or load

* forgot to return a model on first run of model fn

* create model weights on init

* changed how args are passed for testcase

* changed how args are passed for testcase

* try fix init

* trying to init weights on model properly

* trying to init weights on model properly

* just trying all the possibilities

* trying to fix weights setup

* expanded some comments for some tests

* fixed some docstrings and expanded on some comments

* reformatted files

expanded on many comments and added full stops

fixed get/from_config based on optimizer_v2

added model checkpoint test

* capitalized comments properly.

* removed sgd, reduced size of training inputs.

* simplified checkpoint name

* reformatted

* remove run tests in notebook

* updated README.md
fixed indent for __init__
added test for from config and to config

* fixed formatting

* removed distributed tests and added a warning if optimizer is initialized within a strategy scope

* renamed test_wrap to wrap_test because pytest thought it was a test.

* converting tests into the pytest framework

* converted tests and parameterized

* cleaned up code

* added additional checks and doc string for changes in lr multiplier during training.

* changed comment

* Simplified discriminative layer training by using a multi-optimizer wrapper class (see the sketch after this commit's message).

Removed old tests and added new tests conforming to pytest standard.

* Refactored code using black and flake8

* updated init file

* fixed typeguard error and usage of private/experimental api.

* restructured wrapper serialization and removed unnecessary components.

* expanded on docstr and added repr

* cleaned up docstrings, added assertion tests, and added explicit test for only the serialization

* ran black and flake8

* fixed doc string

Co-authored-by: gabrieldemarmiesse <gabrieldemarmiesse@gmail.com>
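The commit trail above implements discriminative layer training by routing each variable's gradients to the inner optimizer matching its learning-rate multiplier. A minimal sketch of that idea using the tfa.optimizers.MultiOptimizer wrapper as it eventually shipped (the model, layer names, and learning rates here are illustrative):

import tensorflow as tf
import tensorflow_addons as tfa

# Toy two-layer model: a backbone and a head.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, input_shape=(8,), name="backbone"),
    tf.keras.layers.Dense(1, name="head"),
])

# Discriminative layer training: a small learning rate for the backbone,
# a larger one for the head; each inner optimizer updates only its layers.
optimizer = tfa.optimizers.MultiOptimizer([
    (tf.keras.optimizers.Adam(learning_rate=1e-4), model.layers[0]),
    (tf.keras.optimizers.Adam(learning_rate=1e-2), model.layers[1]),
])
model.compile(optimizer=optimizer, loss="mse")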
* added doctests layer_norm_simple

* added doctests peephole_lstm_cell.py

* minor changes

* including output for example

* Updated examples to be descriptive and standardized
* Create noisy_dense.py

* Create noisy_dense_test.py

* Update __init__.py

* Fix minor typo

* Update noisy_dense_test.py

* Update comments

* Update comments

* Update noisy_dense.py

* fix typo

* Update noisy_dense.py

* Update noisy_dense_test.py

* Fix compliance issues

* Fix compliance issues

* Update comments

* Fix typo

* Update CODEOWNERS

* Update CODEOWNERS

* add use bias to config

* Update noisy_dense.py

* Update CODEOWNERS

* Revert "Update CODEOWNERS"

This reverts commit 82e979f.

* Update noisy_dense.py

* Update noisy_dense.py

* Update noisy_dense.py

* Update noisy_dense.py

* Revert "Update CODEOWNERS"

This reverts commit 840ab1c.

* Revert "Revert "Update CODEOWNERS""

This reverts commit 7852e62.

* Update noisy_dense.py

* Code reformatted with updated black

* Update noisy_dense.py

* Update noisy_dense.py

* Update noisy_dense.py

* Added support for manual noise reset

* support for noise removal

* tests for noise removal

* use typecheck and remove unicode,

* fix typo and code cleanup

* control noise removal through call

* Inherit from Dense instead of Layer

* Added missing comment

* Documentation and test improvement

* fix typo

* minor formatting changes

* minor formatting fix

Co-authored-by: schaall <52867365+schaall@users.noreply.github.com>
* Added stochastic depth layer

* Fixed code style and added missing __init__ entry

* Fixed tests and style

* Fixed code style

* Updated CODEOWNERS

* Added codeowners for tests

* Changes after code review

* Test and formatting fixes

* Fixed doc string

* Added mixed precision test

* Further code review changes

* Code review changes
* Added filtered_input and constrained_decoding

- Fixes tensorflow#607

We can have a common function for creating filtered_inputs, used in crf_multitag_sequence_score. This function can be reused to modify the input to crf_decode to support constrained decoding (see the sketch after this commit's message).

* Fixed formatting and imports

* Fixed documentation

* Fixed formatting
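The filtering idea in the commit above masks disallowed tags before decoding so the Viterbi path can only pass through permitted tags. A rough sketch of the concept (the shapes, bitmap, and masking step are illustrative; tfa.text.crf_decode is the existing API):

import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative shapes: batch of 2, sequence length 3, 4 tags.
potentials = tf.random.normal([2, 3, 4])
transition_params = tf.random.normal([4, 4])
sequence_length = tf.constant([3, 3])

# tag_bitmap marks the allowed tags at each step (True = allowed).
tag_bitmap = tf.ones([2, 3, 4], dtype=tf.bool)

# Filtered inputs: push disallowed tags to -inf so the decoder
# can never select them (constrained decoding).
filtered = tf.where(tag_bitmap, potentials,
                    tf.fill(tf.shape(potentials), float("-inf")))
tags, scores = tfa.text.crf_decode(filtered, transition_params, sequence_length)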
* Moved build_docs.py and BUILD into /tools/docs/

* Modified paths in documentation

* removing build_docs.py from BUILD

* updating to bazel code format

* Revert "updating to bazel code format"

This reverts commit f97c2ad.

* Revert "removing build_docs.py from BUILD"

This reverts commit 3967d14.

* Updated sanity_check.dockerfile with new path
* all_done

* cohen_kappa_space

* requested changes
@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it." If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@nataliyah123
Contributor Author

@gabrieldemarmiesse Can you please add the cla:yes label, or tell me what to do to clear the cla/google check? I had to close one of my previous PRs and open a new one just because of this. Things I have done:

  • I pulled master first, then checked out the new branch and pulled the branch associated with this PR on my local system. I am still facing the problem.

@nataliyah123
Contributor Author

@googlebot I fixed it.


WindQAQ commented Sep 23, 2020

@nataliyah123 Can you open a new PR for it? It seems the branch is messed up. FYI, the way I pull the latest master is:

git remote add upstream https://github.com/tensorflow/addons.git
git checkout your-branch
git pull upstream master

@nataliyah123
Contributor Author

@nataliyah123 Can you open a new PR for it? It seems the branch is messed up. FYI, the way I pull the latest master is:

git remote add upstream https://github.com/tensorflow/addons.git
git checkout your-branch
git pull upstream master

@WindQAQ I have the upstream as my remote too. The reason I pulled my own branch from origin was that I had made the changes online (on GitHub) and wanted to pull them; what you have described is for a brand-new branch with nothing to pull from origin. The Google bot does not like new email addresses that are not registered in the form and removes the cla:yes label, and the workflow I described here is because of the comment made by gabrieldemarmiesse here.

@nataliyah123
Contributor Author

Opening a new branch as requested.
