
[RELEASE] Tag 0.7 release #2247

Merged
tqchen merged 1 commit into apache:master from tqchen:master
May 26, 2016

Conversation

@tqchen
Member

@tqchen tqchen commented May 26, 2016

An (incomplete) list of changes; thanks to all contributors.

  • 0.6 is skipped because there have been a lot of improvements since the initial release
  • More math operators
    • Element-wise ops and binary ops
  • Attribute support in the computation graph
    • Users can now attach attributes to give hints such as per-variable learning rates, memory allocation plans, etc. (see the first sketch after this list)
  • MXNet is more memory efficient
    • Supports user-defined memory optimization via attributes
  • Support mobile applications by @antinucleon
  • Refreshed and updated documentation
  • Model parallel training of LSTM by @tqchen
  • Simple operator refactor by @tqchen
    • Added operator_util.h to enable quick registration of both NDArray and symbolic ops
  • Distributed training by @mli
  • Support Torch Module by @piiswrong
    • MXNet can now use any module from Torch.
  • Support custom native operator by @piiswrong
  • Support data types including fp16, fp32, fp64, int32, and uint8 by @piiswrong (see the second sketch after this list)
  • Support monitor for easy printing and debugging by @piiswrong
  • Support new Module API by @pluskid
    • The Module API is a mid-level API that can be used in an imperative manner, similar to Torch's Module (see the third sketch after this list)
  • Support bucketing API for variable-length input by @pluskid
  • Support CuDNN v5 by @antinucleon
  • More applications
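
As a rough illustration of the attribute support, here is a minimal sketch (not part of this PR) of attaching a hint to a variable in the symbolic graph. The attribute key `lr_mult` and the layer names are only illustrative; check the documentation of your MXNet version for the keys it actually honors.

```python
import mxnet as mx

data = mx.symbol.Variable('data')
# Attach a string attribute to a weight; optimizers and planners can read such hints.
weight = mx.symbol.Variable('fc1_weight', attr={'lr_mult': '0.1'})
fc1 = mx.symbol.FullyConnected(data=data, weight=weight, num_hidden=128, name='fc1')
net = mx.symbol.SoftmaxOutput(data=fc1, name='softmax')

# Attributes are plain key/value strings stored on the graph node.
print(weight.list_attr())
```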
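The new data-type support can be exercised through the NDArray API roughly as below. This is a sketch assuming the standard `mx.nd.array` and `astype` calls; which operators accept which dtypes depends on the MXNet version.

```python
import numpy as np
import mxnet as mx

a32 = mx.nd.array([1.0, 2.0, 3.0], dtype=np.float32)  # default single precision
a16 = a32.astype(np.float16)                           # half precision (fp16)
a64 = a32.astype(np.float64)                           # double precision (fp64)
i32 = mx.nd.array([1, 2, 3], dtype=np.int32)
u8 = mx.nd.array([1, 2, 3], dtype=np.uint8)

print(a16.dtype, a64.dtype, i32.dtype, u8.dtype)
```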
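Finally, a minimal sketch of the Module API on toy data, assuming `mx.mod.Module`, `mx.io.NDArrayIter`, and the usual `fit` signature; the network, data, and hyperparameters are made up for illustration.

```python
import numpy as np
import mxnet as mx

# Toy network: one fully connected layer followed by a softmax output.
data = mx.symbol.Variable('data')
fc = mx.symbol.FullyConnected(data=data, num_hidden=10, name='fc')
net = mx.symbol.SoftmaxOutput(data=fc, name='softmax')

# Random toy data wrapped in a data iterator.
x = np.random.uniform(size=(100, 20)).astype(np.float32)
y = np.random.randint(0, 10, size=(100,))
train_iter = mx.io.NDArrayIter(x, y, batch_size=10, label_name='softmax_label')

# Imperative-style training, similar in spirit to Torch's Module.
mod = mx.mod.Module(symbol=net, data_names=['data'], label_names=['softmax_label'])
mod.fit(train_iter, num_epoch=2, optimizer='sgd',
        optimizer_params={'learning_rate': 0.1})
```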

@tqchen tqchen merged commit 118b37e into apache:master May 26, 2016
