Conversation
```
                     const char **keys,
                     const char **vals,
                     SymbolHandle *out);
int MXSymbolSetAttrs(SymbolHandle symbol,
```
seems no changes in here?
oh i see it now
Need to enable attributes in https://github.com/dmlc/mxnet-memonger for the backing memory monger.
Refactor it and merge into mxnet?
To make sure things won't break, can you push a PR to https://github.com/dmlc/mxnet-memonger to change the attribute force_mirroring to
Only the Julia build failed. Please review again and merge.
```
std::unordered_map<std::string, std::string> kwargs;
for (nn_uint i = 0; i < num_param; ++i) {
  bool flag = false;
  for (const auto &k : kHiddenKeys) {
```
I am a bit worried about this procedure. Checking these keys directly here can make things slow. We will need a speed test under cython mode to confirm this does not affect the symbol composition speed.
do you have scripts for this?
It's only a loop over 3 elements, so hopefully it won't be a big deal.
Can you use something like a global map structure, so it is O(1)?
I am fine with merging it if it will eventually get deleted as things evolve.
For speed testing, simply do elementwise add composition and measure ops/sec.
See my comment about potential speed issues.
@tqchen
```
jxie@jxie-gpu:/scratch/mxnet$ MXNET_ENFORCE_CYTHON=1 python -m timeit -s "import mxnet as mx; a = mx.sym.Variable('A'); b = mx.sym.Variable('B')" "c = 10*a + 5/b"
```
refactor lr_mult, wd_mult, ctx_group
revert test
update submodule
fix
A historical way is to do something like:

```
import mxnet as mx
import time

a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
nrep = 100
nstep = 10000
start = time.time()
for i in range(nrep):
    for j in range(nstep):
        a = a + b
end = time.time()
print("%g ops/sec" % (nrep * nstep / (end - start)))
```
Need to measure the speed under the cython build, both before and after the change.
Before the change: so the difference is 10%. Doesn't seem to matter much.
Will the set of kHiddenKeys grow later, or is it only for deprecation warning purposes? If so, we can make sure they get removed in the next major version.
It's the difference between typing Activation(lr_mult=xx) or Activation(__lr_mult__=xx).
I see. I am fine with merging it if you can run the python test script I posted. I hope we do not do such automatic conversion in the backend for more arguments, but instead ask users to directly write the underscore form for front-end attributes (or have some helper function for doing so).
84245.5 ops/sec |