@@ -101,7 +101,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -128,7 +129,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -148,7 +150,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
@@ -169,7 +172,8 @@ class GroupedDataset[K, V] private[sql](
    *
    * This function does not support partial aggregation, and as a result requires shuffling all
    * the data in the [[Dataset]]. If an application intends to perform an aggregation over each
-   * key, it is best to use the reduce function or an [[Aggregator]].
+   * key, it is best to use the reduce function or an
+   * [[org.apache.spark.sql.expressions#Aggregator Aggregator]].
    *
    * Internally, the implementation will spill to disk if any given group is too large to fit into
    * memory. However, users must take care to avoid materializing the whole iterator for a group
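The doc comments edited above recommend an `Aggregator` over full-group functions because an `Aggregator` supports partial aggregation. A minimal sketch of one, assuming the Spark 2.x `Aggregator` API (the `SumAgg` name and the `Long` input type are illustrative, not part of this change):

```scala
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

// Illustrative aggregator that sums Long values per key.
// Partial aggregation is what the doc comment is pointing at:
// reduce() folds values into a buffer within a partition, and
// merge() combines partial buffers, so far less data is shuffled
// than when materializing each whole group.
object SumAgg extends Aggregator[Long, Long, Long] {
  def zero: Long = 0L
  def reduce(buffer: Long, value: Long): Long = buffer + value
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(buffer: Long): Long = buffer
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}
```

A hypothetical call site would then look like `ds.groupByKey(keyFunc).agg(SumAgg.toColumn)` rather than a `mapGroups`-style function over the full iterator.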