Conversation
NEWS.md (Outdated)
@@ -1,5 +1,30 @@
MXNet Change Log
================
## 0.12.0
### - New Features - Sparse Tensor Support
Don't need the leading -
Done
NEWS.md (Outdated)
- `NDArray` and `Symbol` now supports "fluent" methods. You can now use `x.exp()` etc instead of `mx.nd.exp(x)` or `mx.sym.exp(x)`
- Added `mx.rtc.CudaModule` for writing and running CUDA kernels from python
- Added `multi_precision` option to optimizer for easier float16 training
### - Performance
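For reference, the "fluent" bullet quoted above means the same operator is exposed both as a free function and as a method on the array object. A toy Python sketch of the pattern (illustrative only, not MXNet's actual `NDArray` implementation):

```python
import math

class ToyArray:
    """Toy stand-in for an array type with a fluent method."""
    def __init__(self, values):
        self.values = list(values)

    def exp(self):
        # fluent style: x.exp()
        return ToyArray(math.exp(v) for v in self.values)

def exp(x):
    # free-function style, analogous to mx.nd.exp(x)
    return x.exp()

x = ToyArray([0.0, 1.0])
# Both spellings give the same result.
assert exp(x).values == x.exp().values
```

The release adds the method-style spelling; the free-function spelling keeps working as before.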
Why were two other performance improvements removed?
Added one back as discussed: "Enabled JIT compilation. Autograd and Gluon hybridize now use less memory and has faster speed. Performance is almost the same with old symbolic style code."
NEWS.md (Outdated)
- Added `mx.autograd.grad` and experimental second order gradient support (though most operators don't support second order gradient yet)
- Added `ConvLSTM` etc to `gluon.contrib`
- Autograd now supports cross-device graphs. Use `x.copyto(mx.gpu(i))` and `x.copyto(mx.cpu())` to do computation on multiple devices.
### - Other New Features
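The `mx.autograd.grad` bullet quoted above mentions second order gradients, i.e. the gradient of a gradient. A plain-Python finite-difference sketch of the concept (no MXNet required; for f(x) = x³, the first derivative is 3x² and the second is 6x):

```python
# Central finite differences to illustrate first and second order gradients.
def f(x):
    return x ** 3

def derivative(g, x, h=1e-5):
    # Central difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 2.0
first = derivative(f, x0)                             # ~ 3 * x0**2 = 12
second = derivative(lambda x: derivative(f, x), x0)   # ~ 6 * x0 = 12
assert abs(first - 12.0) < 1e-3
assert abs(second - 12.0) < 1e-3
```

With autograd-style second order support, the nested `derivative` call corresponds to taking the gradient of a gradient expression.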
Also list limited support for fancy indexing: `x[idx_arr0, idx_arr1, ..., idx_arrn]` is now supported. Full support coming soon in the next release. Check out master to get a preview.
Done
@@ -21,7 +21,7 @@
# Built files are stored in $built
# Version numbers are stored in $tag_list.
# Version numbers are ordered from latest to old and final one is master.
tag_list="0.11.0.rc3 master"
tag_list="0.12.0 0.11.0 master"
@sandeep-krishnamurthy @kevinthesun
Should this change be made now or after the RC is final?
NEWS.md (Outdated)
## 0.12.0
### New Features - Sparse Tensor Support
- Added comprehensive support for sparse matrices. See help on `mx.sym.sparse` and `mx.nd.sparse` for more info.
- Limited support for fancy indexing. `x[idx_arr0, idx_arr1, ..., idx_arrn]` is now supported. Full support coming soon in next release. Checkout master to get a preview.
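The fancy-indexing bullet describes NumPy-style semantics: each index array supplies one coordinate per gathered element. A NumPy sketch for illustration (NumPy stands in for `NDArray` here):

```python
import numpy as np

# Fancy indexing with index arrays: x[rows, cols] gathers one element
# per (row, col) pair rather than a rectangular sub-block.
x = np.arange(12).reshape(3, 4)
rows = np.array([0, 2])
cols = np.array([1, 3])
picked = x[rows, cols]   # elements at (0, 1) and (2, 3)
assert picked.tolist() == [1, 11]
```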
Fancy indexing is for the normal `NDArray`.
@eric-haibin-lin Can you please edit the line and let me know how you want to rephrase it? I will update the PR.
I am just saying that this doesn't belong under sparse tensor support.
Moved to other features. Done.
NEWS.md (Outdated)
### Performance
- Enabled JIT compilation. Autograd and Gluon hybridize now use less memory and has faster speed. Performance is almost the same with old symbolic style code.
- Full support for NVidia Volta GPU Architecture and Cuda 9. Training is up to 3.5x faster than Pascal when using float16.
### - API Changes
why is there a dash before API Changes?
Good catch 👍
NEWS.md (Outdated)
@@ -1,11 +1,38 @@
MXNet Change Log
================
## 0.12.0
### New Features - Sparse Tensor Support
- Added comprehensive support for sparse matrices. See help on `mx.sym.sparse` and `mx.nd.sparse` for more info.
- Added limited cpu support for two sparse formats for `Symbol` and `NDArray` - `CSRNDArray` and `RowSparseNDArray`
- Added a sparse dot product operator and many element-wise sparse operators
- Added a data iterator for sparse data input - `LibSVMIter`
- Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and `Adam`
- Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed kvstore
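For readers unfamiliar with the CSR layout behind `CSRNDArray`, a plain-Python sketch of the format (illustrative only, not MXNet code): `data` holds the nonzero values, `indices` their column ids, and `indptr` marks where each row's slice of `data`/`indices` begins and ends.

```python
dense = [
    [1, 0, 2],
    [0, 0, 3],
    [4, 5, 0],
]

# CSR triplet describing the same matrix.
data = [1, 2, 3, 4, 5]       # nonzero values, row by row
indices = [0, 2, 2, 0, 1]    # column index of each value
indptr = [0, 2, 3, 5]        # row r occupies data[indptr[r]:indptr[r+1]]

# Reconstruct the dense matrix from the CSR triplet to verify the layout.
rebuilt = [[0] * 3 for _ in range(3)]
for row in range(3):
    for k in range(indptr[row], indptr[row + 1]):
        rebuilt[row][indices[k]] = data[k]
assert rebuilt == dense
```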
Should I replace the existing point or add subpoints?
Yeah, please replace the existing point. "comprehensive support" is misleading since we don't have GPU support. Thanks!
done
Ah, all the formats are gone.. :(
NEWS.md (Outdated)
- Added a data iterator for sparse data input - `LibSVMIter`
- Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and `Adam`
- Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed kvstore
- For more information see [full release notes](https://cwiki.apache.org/confluence/display/MXNET/MXNet+0.12.0+Release+Notes)
- For more information see full release notes

Why is this put under sparse? Should we move it to the end?
* Preparing for 0.12.0 Release
* Correcting Syntax in NEWS.md
* Adding to NEWS.md
* Adding to NEWS.md
* build_all_version.sh should be updated only after RC passes
* Edits
* Changes to NEWS.md
* formatting
Description

Checklist

Essentials
- Passed code style checking (`make lint`)

Changes

Comments
Coming up in another PR before tagging the RC:
- Updated README.md
- Updated NEWS.md if needed