update docs && fix typos (apache#4624)
* update docs && fix typos

* fix
tornadomeet authored and piiswrong committed Jan 10, 2017
1 parent 093844e commit d8e64a6
Showing 6 changed files with 8 additions and 5 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -66,7 +66,7 @@ Ask Questions

License
-------
© Contributors, 2015-2016. Licensed under an [Apache-2.0](https://github.com/dmlc/mxnet/blob/master/LICENSE) license.
© Contributors, 2015-2017. Licensed under an [Apache-2.0](https://github.com/dmlc/mxnet/blob/master/LICENSE) license.

Reference Paper
---------------
2 changes: 1 addition & 1 deletion docs/how_to/env_var.md
@@ -55,7 +55,7 @@ Typically, you wouldn't need to change these settings, but they are listed here
* MXNET_BACKWARD_DO_MIRROR (default=0)
- whether to do `mirror` during training to save device memory.
- when set to `1`, during forward propagation the graph executor will `mirror` some layers' feature maps and drop others, re-computing the dropped feature maps when needed. `MXNET_BACKWARD_DO_MIRROR=1` saves 30%~50% of device memory while retaining about 95% of the running speed.
- one extension of `mirror` in MXNet is called [memonger technology](https://arxiv.org/abs/1604.06174); it will save O(sqrt(N)) memory at 75% running speed.
- one extension of `mirror` in MXNet is called [memonger technology](https://arxiv.org/abs/1604.06174); it will use only O(sqrt(N)) memory at 75% running speed.
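As a minimal sketch of the setting described above (only the `MXNET_BACKWARD_DO_MIRROR` name comes from this doc; the helper function is a hypothetical illustration): because the engine reads environment variables at startup, the variable must be set before MXNet is imported in the training process.

```python
import os

# MXNET_BACKWARD_DO_MIRROR must be visible to the MXNet engine at import
# time, so set it before any `import mxnet` in a training script.
os.environ["MXNET_BACKWARD_DO_MIRROR"] = "1"

def mirror_enabled():
    # Hypothetical helper mirroring the documented convention:
    # unset or "0" means mirroring is disabled, "1" enables it.
    return os.environ.get("MXNET_BACKWARD_DO_MIRROR", "0") == "1"

print(mirror_enabled())
```

Setting the variable from inside Python only works if it happens before the library is loaded; otherwise export it in the shell that launches the job.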

## Control the profiler

3 changes: 3 additions & 0 deletions docs/how_to/index.md
@@ -47,6 +47,9 @@ In particular, the popular task of using a ConvNet to figure out what is in an i
* [How to improve MXNet performance](http://mxnet.io/how_to/perf.html)
*Explains how to improve MXNet performance by using the recommended data format, storage locations, batch sizes, libraries, and parameters, and more.*

* [How to use NNPACK to improve the CPU performance of MXNet](http://mxnet.io/how_to/nnpack.html)
*Explains how to improve the CPU performance of MXNet by using [NNPACK](https://github.com/Maratyszcza/NNPACK). Currently, NNPACK supports the convolution, max-pooling, and fully-connected operators.*

* [How to use MXNet within a Matlab environment](https://github.com/dmlc/mxnet/tree/master/matlab)
*Provides the commands to load a model and data, get predictions, and do feature extraction in Matlab using the MXNet library. It includes an implementation difference between the two that can cause issues, and some basic troubleshooting.*

1 change: 1 addition & 0 deletions docs/how_to/nnpack.md
@@ -8,6 +8,7 @@ MXNet (nnvm branch) has integrated NNPACK for forward propagation (inference only)
### Conditions
The underlying implementation of NNPACK utilizes other acceleration methods, such as [fft](https://arxiv.org/abs/1312.5851) and [winograd](https://arxiv.org/abs/1509.09308), but these algorithms work well only for certain `batch size`, `kernel size`, and `stride` settings, so not every convolution/max-pooling/fully-connected operation can be powered by NNPACK. If the conditions are not met, MXNet falls back to its default implementation automatically.

NNPACK only supports Linux and OS X host systems; Windows is not supported at present.
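On a supported host, enabling NNPACK at build time would look roughly like the following `config.mk` fragment. This is a hedged sketch: the `USE_NNPACK` flag and the `NNPACK_ROOT` variable are assumptions for illustration, not taken from this page; consult the full nnpack.md doc for the authoritative build flags.

```make
# Hypothetical config.mk fragment: enable NNPACK and point MXNet
# at a local NNPACK install prefix (assumed variable names).
USE_NNPACK = 1
NNPACK_ROOT = $(HOME)/NNPACK
```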
The following table shows the conditions under which NNPACK will work.

| operation | conditions |
3 changes: 1 addition & 2 deletions docs/zh/mxnet-dep-engine-implemention.md
@@ -48,8 +48,7 @@ An Op actually represents a computation and the vars it depends on; first let's look at its

# Var<a id="orgheadline9"></a>

A var can be regarded as a tag used to mark each object, so that an Op's dependencies on objects can be simplified into dependencies on vars;
in this way a generic dependency engine that does not depend on concrete objects can be built. Var is the key to the dependency engine.
A var can be regarded as a tag used to mark each object, so that an Op's dependencies on objects can be simplified into dependencies on vars, from which a generic dependency engine that does not depend on concrete objects can be built. Var is the key to the dependency engine.

## Class diagram<a id="orgheadline3"></a>

2 changes: 1 addition & 1 deletion src/io/iter_image_recordio.cc
@@ -161,7 +161,7 @@ class ImageRecordIOParser {
inline bool ParseNext(std::vector<InstVector<DType>> *out);

private:
// magic nyumber to see prng
// magic number to seed prng
static const int kRandMagic = 111;
/*! \brief parameters */
ImageRecParserParam param_;
