
Commit 719973b

raelawang authored and rxin committed
[SPARK-13274] Fix Aggregator Links on GroupedDataset Scala API
Update Aggregator links to point to #org.apache.spark.sql.expressions.Aggregator

Author: raela <raela@databricks.com>

Closes #11158 from raelawang/master.
1 parent 0902e20 commit 719973b

File tree

1 file changed: +8 −4 lines


sql/core/src/main/scala/org/apache/spark/sql/GroupedDataset.scala

Lines changed: 8 additions & 4 deletions
@@ -101,7 +101,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -128,7 +129,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -148,7 +150,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -169,7 +172,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
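For readers following the fixed links: the Scaladoc above recommends an Aggregator over mapGroups precisely because an Aggregator supports partial aggregation (combining per-partition results before the shuffle). A minimal sketch of one is below, written against the later (Spark 2.x+) form of the org.apache.spark.sql.expressions.Aggregator API, which added the encoder methods; the object name and usage are illustrative, not part of this commit.

```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// A simple typed sum over Long inputs. Because reduce/merge can fold
// and combine intermediate buffers per partition, Spark can aggregate
// partially before shuffling -- the property mapGroups lacks.
object SumLong extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L                             // initial buffer value
  def reduce(b: Long, a: Long): Long = b + a      // fold one input into the buffer
  def merge(b1: Long, b2: Long): Long = b1 + b2   // combine two partial buffers
  def finish(reduction: Long): Long = reduction   // produce the final result
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}
```

It would typically be applied to a grouped Dataset as a typed column, e.g. `ds.groupByKey(keyFunc).agg(SumLong.toColumn)`. (In the Spark 1.6-era API this commit touches, Aggregator had the same zero/reduce/merge/finish shape but no encoder methods.)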
