Fix code block indents in .md files (intel-analytics#2978)
* Fix code block indents in .md files

Previously, many of the code blocks in the markdown files were badly indented, with stray whitespace at the beginning of lines, so users could not simply select, copy, paste, and run them (in the case of Python). All of these are now fixed, so no code block has stray whitespace at the beginning of its lines.
It would be nice to make sure in future commits that all code blocks are properly indented inside and begin with the right amount of whitespace!

* Fix small style issue

* Fix indents

* Fix indent and add \ for multiline commands

Change indent from 3 spaces to 4, and add "\" for multiline bash commands
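
For illustration, a representative hunk of the kind this change applies (a sketch rather than a verbatim excerpt; the exact indent widths and commands vary by file):

```diff
-   ${ANALYTICS_ZOO_HOME}/bin/jupyter-with-zoo.sh --master ${MASTER} --driver-cores 4
+${ANALYTICS_ZOO_HOME}/bin/jupyter-with-zoo.sh \
+  --master ${MASTER} \
+  --driver-cores 4
```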

Co-authored-by: Yifan Zhu <fanzhuyifan@gmail.com>
fanzhuyifan committed Nov 13, 2020
1 parent fe11647 commit 77ac45a
Showing 15 changed files with 149 additions and 154 deletions.
2 changes: 0 additions & 2 deletions apps/recommendation-ncf/README.md
@@ -28,14 +28,12 @@ Run the following command for Spark local mode (MASTER=local[*]) or cluster mode
export SPARK_HOME=the root directory of Spark
export ANALYTICS_ZOO_HOME=the folder where you extract the downloaded Analytics Zoo zip package

```
${ANALYTICS_ZOO_HOME}/bin/jupyter-with-zoo.sh \
--master ${MASTER} \
--driver-cores 4 \
--driver-memory 22g \
--total-executor-cores 4 \
--executor-cores 4 \
--executor-memory 22g
```

See [here](https://analytics-zoo.github.io/master/#PythonUserGuide/run/#run-without-pip-install) for more guidance on running without pip install.
2 changes: 0 additions & 2 deletions apps/recommendation-wide-n-deep/README.md
@@ -29,14 +29,12 @@ Run the following command for Spark local mode (MASTER=local[*]) or cluster mode
export SPARK_HOME=the root directory of Spark
export ANALYTICS_ZOO_HOME=the folder where you extract the downloaded Analytics Zoo zip package

```
${ANALYTICS_ZOO_HOME}/bin/jupyter-with-zoo.sh \
--master ${MASTER} \
--driver-cores 4 \
--driver-memory 22g \
--total-executor-cores 4 \
--executor-cores 4 \
--executor-memory 22g
```

See [here](https://analytics-zoo.github.io/master/#PythonUserGuide/run/#run-without-pip-install) for more guidance on running without pip install.
62 changes: 31 additions & 31 deletions docker/hyperzoo/submit-examples-on-k8s.md
@@ -26,7 +26,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### anomalydetection

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -57,7 +57,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### attention

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -89,7 +89,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### autograd custom

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -120,7 +120,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### autograd customloss

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -150,7 +150,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### image classification

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -183,7 +183,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### inception

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -218,7 +218,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### nnframes image finetuning

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -251,7 +251,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### nnframes image inference

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -284,7 +284,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### nnframes image transfer learning

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -317,7 +317,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### object-detection

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -348,7 +348,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### openvino

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -382,7 +382,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### pytorch inference

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -412,7 +412,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
#### qranker

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -448,7 +448,7 @@ Look into [python example](https://github.com/intel-analytics/analytics-zoo/tree
###### tensorflow tfnet

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -510,7 +510,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### tensorflow TFPark tf_optimizer train

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -540,7 +540,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### tensorflow tfpark tf_optimizer evaluate

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -570,7 +570,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### text classification

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -603,7 +603,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### vnniopenvino

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -684,7 +684,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### chatbot

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -716,7 +716,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### imageClassification

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -750,7 +750,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### inception

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -786,7 +786,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### nnframes finetune

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -893,7 +893,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### objectdetection

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -925,7 +925,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### qaranker

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -961,7 +961,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### recommendation wideAndDeepExample

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1027,7 +1027,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### resnet

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1061,7 +1061,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### tensorflow/tfnet

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1095,7 +1095,7 @@ ${SPARK_HOME}/bin/spark-submit \
#### textClassification

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1130,7 +1130,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### vnni perf

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1231,7 +1231,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### vnni BigDL ImageNet evaluation

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
@@ -1264,7 +1264,7 @@ ${SPARK_HOME}/bin/spark-submit \
###### **vnni BigDL predict**

```bash
${SPARK_HOME}/bin/spark-submit \
--master ${RUNTIME_SPARK_MASTER} \
--deploy-mode cluster \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
4 changes: 2 additions & 2 deletions docs/docs/APIGuide/FeatureEngineering/image.md
@@ -51,7 +51,7 @@ Create DistributedImageSet from rdd of ImageFeature

* data: array of ImageFeature
```
def read(path: String, sc: SparkContext = null, minPartitions: Int = 1, resizeH: Int = -1, resizeW: Int = -1): ImageSet
```
Read images as an ImageSet.
If sc is defined, the images are read as a DistributedImageSet from the local file system or HDFS.
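
As a quick sketch of the distributed path (the HDFS directory, partition count, and resize values are illustrative; an active SparkContext `sc` is assumed):

```scala
import com.intel.analytics.zoo.feature.image.ImageSet

// Read a folder of images from HDFS as a DistributedImageSet,
// resizing every image to 256x256 while loading.
val imageSet = ImageSet.read("hdfs://path/to/images", sc,
  minPartitions = 4, resizeH = 256, resizeW = 256)
```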
@@ -345,4 +345,4 @@ transformed_image = affine(image_set)
* affine_mat: numpy array of shape 3x3. Defines the affine transformation from dst to src.
* translation: numpy array of dimension 3. Default value is np.zeros(3). Defines the translation along each axis.
* clamp_mode: str, default value is "clamp". Defines how interpolation is handled outside the input image.
* pad_val: float, default is 0.0. Defines the padding value when clamp_mode="padding". Setting this value when clamp_mode="clamp" will cause an error.
22 changes: 11 additions & 11 deletions docs/docs/APIGuide/PipelineAPI/nnframes.md
@@ -438,12 +438,12 @@ into DataFrame.

Scala:
```scala
val imageDF = NNImageReader.readImages(imageDirectory, sc)
```

Python:
```python
image_frame = NNImageReader.readImages(image_path, self.sc)
```

The output DataFrame contains a single column named "image". The schema of the "image" column can be
@@ -453,15 +453,15 @@ Row(origin, height, width, num of channels, mode, data), where origin contains the URI for the image file,
and `data` holds the original file bytes for the image file. `mode` represents the OpenCV-compatible
type: CV_8UC3, CV_8UC1 in most cases.
```scala
val byteSchema = StructType(
  StructField("origin", StringType, true) ::
  StructField("height", IntegerType, false) ::
  StructField("width", IntegerType, false) ::
  StructField("nChannels", IntegerType, false) ::
  // OpenCV-compatible type: CV_8UC3, CV_32FC3 in most cases
  StructField("mode", IntegerType, false) ::
  // Bytes in OpenCV-compatible order: row-wise BGR in most cases
  StructField("data", BinaryType, false) :: Nil)
```
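
For instance, a minimal sketch of loading images and inspecting this schema (the directory path is illustrative; a SparkContext `sc` is assumed):

```scala
// Hypothetical usage: read images into a DataFrame and look at the nested "image" struct.
val imageDF = NNImageReader.readImages("hdfs://path/to/images", sc)
imageDF.printSchema()
imageDF.select("image.origin", "image.height", "image.width").show(5, false)
```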

After loading the image, the user can compose the preprocess steps with the `Preprocessing` defined
3 changes: 1 addition & 2 deletions docs/docs/ClusterServingGuide/QuickStart_CN.md
@@ -47,7 +47,7 @@ model:
To use the synchronous API, you need to pass input that matches the model's format, **and note that the data type is float, i.e. a trailing decimal point marks the value as float**. An example follows.

Assuming Redis is started with host "localhost" and port "6379", the [synchronous service](#安装) is started at url "127.0.0.1:10020", and the model input is one-dimensional with two float constants (the input name "t" below is illustrative), the inference script looks like this:
```python

input_api = InputQueue(host="localhost", port="6379", sync=True, frontend_url="http://127.0.0.1:10020")
s = '''{
"instances": [
{
  "t": [1.0, 2.0]
}
]
}'''
a = input_api.predict(s)
print(a)
```