
Commit 5848881

fix errors in markdown
1 parent 78209b0 commit 5848881

File tree

1 file changed: +11 -11 lines changed


docs/ml-features.md

Lines changed: 11 additions & 11 deletions
@@ -63,7 +63,7 @@ the [IDF Python docs](api/python/pyspark.ml.html#pyspark.ml.feature.IDF) for mor
 `Word2VecModel`. The model maps each word to a unique fixed-size vector. The `Word2VecModel`
 transforms each document into a vector using the average of all words in the document; this vector
 can then be used for as features for prediction, document similarity calculations, etc.
-Please refer to the [MLlib user guide on Word2Vec](mllib-feature-extraction.html#Word2Vec) for more
+Please refer to the [MLlib user guide on Word2Vec](mllib-feature-extraction.html#word2Vec) for more
 details.
 
 In the following code segment, we start with a set of documents, each of which is represented as a sequence of words. For each document, we transform it into a feature vector. This feature vector could then be passed to a learning algorithm.
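The hunk above touches the prose describing how `Word2VecModel` averages per-word vectors into one document vector. As a rough sketch of the spark.ml API that prose refers to (illustrative only, not part of this commit's diff), assuming a `SparkSession` named `spark` and a few toy documents:

```scala
// Illustrative sketch only, not part of the commit; assumes an existing
// SparkSession `spark` (Spark 2.x style). Column names are placeholders.
import org.apache.spark.ml.feature.Word2Vec

// Each row is one document, represented as a sequence of words.
val documentDF = spark.createDataFrame(Seq(
  "Hi I heard about Spark".split(" "),
  "I wish Java could use case classes".split(" "),
  "Logistic regression models are neat".split(" ")
).map(Tuple1.apply)).toDF("text")

// Learn a fixed-size vector per word, then average them per document.
val word2Vec = new Word2Vec()
  .setInputCol("text")
  .setOutputCol("result")
  .setVectorSize(3)
  .setMinCount(0)
val model = word2Vec.fit(documentDF)

// "result" holds the averaged document vector, usable as features downstream.
val result = model.transform(documentDF)
```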
@@ -411,7 +411,7 @@ for more details on the API.
 Refer to the [DCT Java docs](api/java/org/apache/spark/ml/feature/DCT.html)
 for more details on the API.
 
-{% include_example java/org/apache/spark/examples/ml/JavaDCTExample.java %}}
+{% include_example java/org/apache/spark/examples/ml/JavaDCTExample.java %}
 </div>
 </div>
 
@@ -564,23 +564,23 @@ for more details on the API.
 The following example demonstrates how to load a dataset in libsvm format and then normalize each row to have unit $L^2$ norm and unit $L^\infty$ norm.
 
 <div class="codetabs">
-<div data-lang="scala">
+<div data-lang="scala" markdown="1">
 
 Refer to the [Normalizer Scala docs](api/scala/index.html#org.apache.spark.ml.feature.Normalizer)
 for more details on the API.
 
 {% include_example scala/org/apache/spark/examples/ml/NormalizerExample.scala %}
 </div>
 
-<div data-lang="java">
+<div data-lang="java" markdown="1">
 
 Refer to the [Normalizer Java docs](api/java/org/apache/spark/ml/feature/Normalizer.html)
 for more details on the API.
 
 {% include_example java/org/apache/spark/examples/ml/JavaNormalizerExample.java %}
 </div>
 
-<div data-lang="python">
+<div data-lang="python" markdown="1">
 
 Refer to the [Normalizer Python docs](api/python/pyspark.ml.html#pyspark.ml.feature.Normalizer)
 for more details on the API.
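The paragraph touched here points at the `Normalizer` examples pulled in by the `{% include_example %}` tags. A minimal sketch of that usage (illustrative, not part of this diff), assuming a `SparkSession` named `spark` and a placeholder libsvm path:

```scala
// Illustrative sketch only; assumes a SparkSession `spark` and a placeholder
// libsvm file path.
import org.apache.spark.ml.feature.Normalizer

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

// Normalize each row of the "features" column to unit L^2 norm (p = 2).
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(2.0)
val l2NormData = normalizer.transform(dataFrame)

// The same transformer yields unit L^inf norm rows by overriding p at transform time.
val lInfNormData = normalizer.transform(dataFrame, normalizer.p -> Double.PositiveInfinity)
```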
@@ -604,23 +604,23 @@ Note that if the standard deviation of a feature is zero, it will return default
 The following example demonstrates how to load a dataset in libsvm format and then normalize each feature to have unit standard deviation.
 
 <div class="codetabs">
-<div data-lang="scala">
+<div data-lang="scala" markdown="1">
 
 Refer to the [StandardScaler Scala docs](api/scala/index.html#org.apache.spark.ml.feature.StandardScaler)
 for more details on the API.
 
 {% include_example scala/org/apache/spark/examples/ml/StandardScalerExample.scala %}
 </div>
 
-<div data-lang="java">
+<div data-lang="java" markdown="1">
 
 Refer to the [StandardScaler Java docs](api/java/org/apache/spark/ml/feature/StandardScaler.html)
 for more details on the API.
 
 {% include_example java/org/apache/spark/examples/ml/JavaStandardScalerExample.java %}
 </div>
 
-<div data-lang="python">
+<div data-lang="python" markdown="1">
 
 Refer to the [StandardScaler Python docs](api/python/pyspark.ml.html#pyspark.ml.feature.StandardScaler)
 for more details on the API.
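Likewise for `StandardScaler`: a short illustrative sketch (not part of this diff) of scaling each feature to unit standard deviation, under the same `spark` and placeholder-path assumptions:

```scala
// Illustrative sketch only; same SparkSession and placeholder-path assumptions.
import org.apache.spark.ml.feature.StandardScaler

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)   // scale each feature to unit standard deviation
  .setWithMean(false) // leave features uncentered so sparse vectors stay sparse

// fit() computes per-feature summary statistics; transform() rescales.
val scalerModel = scaler.fit(dataFrame)
val scaledData = scalerModel.transform(dataFrame)
```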
@@ -683,23 +683,23 @@ More details can be found in the API docs for [Bucketizer](api/scala/index.html#
 The following example demonstrates how to bucketize a column of `Double`s into another index-wised column.
 
 <div class="codetabs">
-<div data-lang="scala">
+<div data-lang="scala" markdown="1">
 
 Refer to the [Bucketizer Scala docs](api/scala/index.html#org.apache.spark.ml.feature.Bucketizer)
 for more details on the API.
 
 {% include_example scala/org/apache/spark/examples/ml/BucketizerExample.scala %}
 </div>
 
-<div data-lang="java">
+<div data-lang="java" markdown="1">
 
 Refer to the [Bucketizer Java docs](api/java/org/apache/spark/ml/feature/Bucketizer.html)
 for more details on the API.
 
 {% include_example java/org/apache/spark/examples/ml/JavaBucketizerExample.java %}
 </div>
 
-<div data-lang="python">
+<div data-lang="python" markdown="1">
 
 Refer to the [Bucketizer Python docs](api/python/pyspark.ml.html#pyspark.ml.feature.Bucketizer)
 for more details on the API.
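And for `Bucketizer`: a brief illustrative sketch (not part of this diff) of bucketing a `Double` column into bucket indices, again assuming a `SparkSession` named `spark` and toy data:

```scala
// Illustrative sketch only; assumes a SparkSession `spark`; splits and data are toy values.
import org.apache.spark.ml.feature.Bucketizer

// n+1 split points define n buckets: [-inf, -0.5), [-0.5, 0.0), [0.0, 0.5), [0.5, +inf)
val splits = Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity)

val data = Array(-0.5, -0.3, 0.0, 0.2)
val dataFrame = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

// Each Double is replaced by the index of the bucket it falls into.
val bucketizer = new Bucketizer()
  .setInputCol("features")
  .setOutputCol("bucketedFeatures")
  .setSplits(splits)
val bucketedData = bucketizer.transform(dataFrame)
```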
