Commit a6b561b

Fixing doc generation errors

1 parent: 8e44fc6
1 file changed
docs/src/reference/asciidoc/core/spark.adoc

Lines changed: 6 additions & 6 deletions
@@ -860,13 +860,13 @@ jssc.start() <4>
 <4> launch stream job

 [float]
-[[spark-write-dyn]]
+[[spark-streaming-write-dyn]]
 ==== Writing to dynamic/multi-resources

 For cases when the data being written to {es} needs to be indexed under different buckets (based on the data content) one can use the `es.resource.write` field which accepts a pattern that is resolved from the document content, at runtime. Following the aforementioned <<cfg-multi-writes,media example>>, one could configure it as follows:

 [float]
-[[spark-write-dyn-scala]]
+[[spark-streaming-write-dyn-scala]]
 ===== Scala

 [source,scala]
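The Scala listing this hunk leads into is unchanged context and therefore not shown in the diff. A minimal sketch of the dynamic-resource write it describes, assuming a local Spark master, the elasticsearch-spark streaming artifact on the classpath, and illustrative +media_type+ documents:

[source,scala]
----
import scala.collection.mutable

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.elasticsearch.spark.streaming._            // implicit saveToEs on DStreams

val conf = new SparkConf().setAppName("dyn-write").setMaster("local[*]")
val ssc = new StreamingContext(conf, Seconds(1))

// illustrative documents; "media_type" picks the target resource per document
val game = Map("media_type" -> "game", "title" -> "FF VI")
val book = Map("media_type" -> "book", "title" -> "Harry Potter")

val batch = ssc.sparkContext.makeRDD(Seq(game, book))

// "{media_type}" is resolved at runtime from each document's content
ssc.queueStream(mutable.Queue(batch)).saveToEs("my-collection-{media_type}/doc")
ssc.start()
----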
@@ -887,7 +887,7 @@ ssc.start()
 For each document/object about to be written, {eh} will extract the +media_type+ field and use its value to determine the target resource.

 [float]
-[[spark-write-dyn-java]]
+[[spark-streaming-write-dyn-java]]
 ===== Java

 As expected, things in Java are strikingly similar:
@@ -912,7 +912,7 @@ jssc.start();
 <1> Save each object based on its resource pattern, +media_type+ in this example

 [float]
-[[spark-write-meta]]
+[[spark-streaming-write-meta]]
 ==== Handling document metadata

 {es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Furthermore, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
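As a quick sketch of the pair-RDD mechanism described above (the resource name +airports/2015+ and the documents are illustrative; in the simplest case the keys become the document ids):

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._                      // implicit saveToEsWithMeta on pair RDDs

val sc = new SparkContext(new SparkConf().setAppName("meta").setMaster("local[*]"))

val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")

// each key (1, 2) is used as the id of its paired document
val airportsRDD = sc.makeRDD(Seq((1, otp), (2, muc)))
airportsRDD.saveToEsWithMeta("airports/2015")
----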
@@ -924,7 +924,7 @@ Thus a +DStream+'s keys can be a +Map+ containing the +Metadata+ for each document
 This sounds more complicated than it is, so let us see some examples.

 [float]
-[[spark-write-meta-scala]]
+[[spark-streaming-write-meta-scala]]
 ===== Scala

 Pair ++DStream++s, or simply put ++DStream++s with the signature +DStream[(K,V)]+, can take advantage of the +saveToEsWithMeta+ methods that are available either through the _implicit_ import of the +org.elasticsearch.spark.streaming+ package or the +EsSparkStreaming+ object.
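A hedged sketch of +saveToEsWithMeta+ on a pair +DStream+, with the keys carrying a +Map+ of +Metadata+ values (the resource name and metadata values are illustrative, and the +StreamingContext+ is assumed to already exist as in the surrounding examples):

[source,scala]
----
import scala.collection.mutable

import org.elasticsearch.spark.rdd.Metadata._         // metadata keys: ID, VERSION, ...
import org.elasticsearch.spark.streaming._            // implicit saveToEsWithMeta on pair DStreams

// assumes an existing StreamingContext named `ssc`
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")

val otpMeta = Map(ID -> 1)                            // just an id for the first document
val mucMeta = Map(ID -> 2, VERSION -> "23")           // an id plus a version for the second

val batch = ssc.sparkContext.makeRDD(Seq((otpMeta, otp), (mucMeta, muc)))
ssc.queueStream(mutable.Queue(batch)).saveToEsWithMeta("airports/2015")
ssc.start()
----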
@@ -990,7 +990,7 @@ ssc.start()
 <7> The +DStream+ is configured to index the data accordingly using the +saveToEsWithMeta+ method

 [float]
-[[spark-write-meta-java]]
+[[spark-streaming-write-meta-java]]
 ===== Java

 In a similar fashion, on the Java side, +JavaEsSparkStreaming+ provides +saveToEsWithMeta+ methods that are applied to +JavaPairDStream+ (the equivalent in Java of +DStream[(K,V)]+).
