This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Prepare for 0.12.0 Release #8263

Merged: 8 commits into apache:dev on Oct 14, 2017

Conversation

@mbaijal (Contributor) commented Oct 13, 2017

Description

  • Bump up version to 0.12.0
  • Update NEWS for v0.12.0 and link to release notes

Checklist

Essentials

  • Passed code style checking (make lint)
  • Changes are complete (i.e. I finished coding on this PR)
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Updated all the necessary files with version 0.12.0

Comments

Coming up in another PR before tagging the RC:
Updated README.md
Updated NEWS.md if needed

NEWS.md Outdated
@@ -1,5 +1,30 @@
MXNet Change Log
================
## 0.12.0
### - New Features - Sparse Tensor Support
Contributor:

Don't need the leading -

Contributor Author:

Done

NEWS.md Outdated
- `NDArray` and `Symbol` now supports "fluent" methods. You can now use `x.exp()` etc instead of `mx.nd.exp(x)` or `mx.sym.exp(x)`
- Added `mx.rtc.CudaModule` for writing and running CUDA kernels from python
- Added `multi_precision` option to optimizer for easier float16 training
### - Performance
Contributor:

Why are two other performance improvements removed?

Contributor Author:

Added 1 back as discussed - " Enabled JIT compilation. Autograd and Gluon hybridize now use less memory and has faster speed. Performance is almost the same with old symbolic style code"
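The "fluent" methods mentioned in the hunk above follow a simple pattern: each method returns a new object, so calls chain naturally (`x.exp()` instead of `mx.nd.exp(x)`). A minimal pure-Python sketch of that pattern, not MXNet code (the `Scalar` class and its methods are illustrative assumptions):

```python
import math

class Scalar:
    """Toy value type illustrating the fluent-method pattern:
    each method returns a new Scalar, so calls can be chained."""

    def __init__(self, value):
        self.value = value

    def exp(self):
        # Fluent counterpart of a free function like exp(x)
        return Scalar(math.exp(self.value))

    def log(self):
        return Scalar(math.log(self.value))

x = Scalar(2.0)
# Chained fluent style: log(exp(2.0)) == 2.0
y = x.exp().log()
print(round(y.value, 6))  # -> 2.0
```

The same chaining is what `NDArray`/`Symbol` gained here, just over arrays instead of a single float.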

NEWS.md Outdated
- Added `mx.autograd.grad` and experimental second order gradient support (though most operators don't support second order gradient yet)
- Added `ConvLSTM` etc to `gluon.contrib`
- Autograd now supports cross-device graphs. Use `x.copyto(mx.gpu(i))` and `x.copyto(mx.cpu())` to do computation on multiple devices.
### - Other New Features
@piiswrong (Contributor) commented Oct 13, 2017:

Also list limited support for fancy indexing. x[idx_arr0, idx_arr1, ..., idx_arrn] is now supported. Full support coming soon in next release. Checkout master to get a preview

Contributor Author:

Done

@@ -21,7 +21,7 @@
# Built files are stored in $built
# Version numbers are stored in $tag_list.
# Version numbers are ordered from latest to old and final one is master.
tag_list="0.11.0.rc3 master"
tag_list="0.12.0 0.11.0 master"
Contributor Author:

@sandeep-krishnamurthy @kevinthesun
Should this change be made now or after the RC is final?

NEWS.md Outdated
## 0.12.0
### New Features - Sparse Tensor Support
- Added comprehensive support for sparse matrices. See help on `mx.sym.sparse` and `mx.nd.sparse` for more info.
- Limited support for fancy indexing. x[idx_arr0, idx_arr1, ..., idx_arrn] is now supported. Full support is coming in the next release; check out master for a preview.
Member:

fancy indexing is for normal NDArray

@mbaijal (Contributor Author) commented Oct 13, 2017:

@eric-haibin-lin Can you please edit the line and let me know how you want to rephrase it? I will update the PR.

Member:

I am just saying that this doesn't belong to sparse tensor

Contributor Author:

Moved to other features. Done.
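The limited fancy indexing discussed above gathers one element per position across the index arrays, following the usual integer-array indexing semantics. A pure-Python sketch of those semantics for the 2-D case (not the MXNet implementation; `fancy_index` is a hypothetical helper):

```python
def fancy_index(x, idx0, idx1):
    """Gather x[idx0[k]][idx1[k]] for each k, mimicking the semantics
    of integer-array (fancy) indexing x[idx_arr0, idx_arr1] on a 2-D array."""
    if len(idx0) != len(idx1):
        raise ValueError("index arrays must have the same length")
    return [x[i][j] for i, j in zip(idx0, idx1)]

x = [[0, 1, 2],
     [3, 4, 5],
     [6, 7, 8]]
# Picks elements at (0, 2), (1, 0), (2, 1)
print(fancy_index(x, [0, 1, 2], [2, 0, 1]))  # -> [2, 3, 7]
```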

NEWS.md Outdated
### Performance
- Enabled JIT compilation. Autograd and Gluon hybridize now use less memory and run faster, with performance almost the same as the old symbolic-style code.
- Full support for the NVIDIA Volta GPU architecture and CUDA 9. Training is up to 3.5x faster than on Pascal when using float16.
### - API Changes
Member:

why is there a dash before API Changes?

Contributor Author:

Good catch 👍

NEWS.md Outdated
@@ -1,11 +1,38 @@
MXNet Change Log
================
## 0.12.0
### New Features - Sparse Tensor Support
- Added comprehensive support for sparse matrices. See help on `mx.sym.sparse` and `mx.nd.sparse` for more info.
Member:

  • Added limited cpu support for two sparse formats for Symbol and NDArray - CSRNDArray and RowSparseNDArray
  • Added a sparse dot product operator and many element-wise sparse operators
  • Added a data iterator for sparse data input - LibSVMIter
  • Added three optimizers for sparse gradient updates: Ftrl, SGD and Adam
  • Added push and row_sparse_pull with RowSparseNDArray in distributed kvstore

Contributor Author:

Should I replace the existing point or add subpoints?

Member:

Yeah, please replace the existing point. "Comprehensive support" is misleading since we don't have GPU support. Thanks!

Contributor Author:

done

Member:

Ah, all the formats are gone.. :(

NEWS.md Outdated
- Added a data iterator for sparse data input - LibSVMIter
- Added three optimizers for sparse gradient updates: Ftrl, SGD and Adam
- Added push and row_sparse_pull with RowSparseNDArray in distributed kvstore
- For more information see [full release notes](https://cwiki.apache.org/confluence/display/MXNET/MXNet+0.12.0+Release+Notes)
Member:

    • For more information see full release notes
      Why is this put under sparse? Should we move it to the end?
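For context on the CSRNDArray format discussed in the threads above, here is a minimal pure-Python sketch of the standard CSR (data/indices/indptr) layout. This is illustrative only, under the usual CSR convention, and is not the MXNet implementation:

```python
def csr_row(data, indices, indptr, row, ncols):
    """Expand one row of a CSR matrix to dense form.

    data    -- nonzero values, stored row by row
    indices -- column index of each value in `data`
    indptr  -- indptr[r]:indptr[r+1] is the slice of `data` for row r
    """
    dense = [0] * ncols
    for k in range(indptr[row], indptr[row + 1]):
        dense[indices[k]] = data[k]
    return dense

# The 2x3 matrix [[1, 0, 2],
#                 [0, 0, 3]] in CSR form:
data = [1, 2, 3]
indices = [0, 2, 2]
indptr = [0, 2, 3]
print(csr_row(data, indices, indptr, 0, 3))  # -> [1, 0, 2]
print(csr_row(data, indices, indptr, 1, 3))  # -> [0, 0, 3]
```

Storing only nonzeros this way is what makes the sparse dot product and element-wise operators mentioned in the changelog worthwhile for mostly-zero matrices.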

@piiswrong piiswrong changed the base branch from master to dev October 14, 2017 00:44
@piiswrong piiswrong merged commit a2c3c37 into apache:dev Oct 14, 2017
piiswrong pushed a commit to piiswrong/mxnet that referenced this pull request Oct 14, 2017
* Preparing for 0.12.0 Release

* Correcting Syntax in NEWS.md

* Adding to NEWS.md

* Adding to NEWS.md

* build_all_version.sh should be updated only after RC passes

* Edits

* Changes to NEWS.md

* formatting
piiswrong pushed a commit that referenced this pull request Oct 14, 2017
crazy-cat pushed a commit to crazy-cat/incubator-mxnet that referenced this pull request Oct 26, 2017
3 participants