sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala (1 file changed: +1 −5)

@@ -218,7 +218,7 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
    * This function goes through the input once to determine the input schema. If you know the
    * schema in advance, use the version that specifies the schema to avoid the extra scan.
    *
-   * You can set the following structured streaming option(s):
+   * You can set the following option(s):
    * <ul>
    * <li>`maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be
    * considered in every trigger.</li>
@@ -227,10 +227,6 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
    * You can find the JSON-specific options for reading JSON file stream in
    * <a href="https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option">
    * Data Source Option</a> in the version you use.
-   * More general options can be found in
-   * <a href=
-   * "https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html">
-   * Generic Files Source Options</a> in the version you use.
    *
    * @since 2.0.0
    */
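For context only (not part of this patch): a minimal sketch of how the `maxFilesPerTrigger` option described in the scaladoc above is used when reading a JSON file stream. The `SparkSession`, the schema, and the input path `/tmp/json-input` are illustrative assumptions, not taken from the diff.

```scala
// Minimal usage sketch: stream JSON files with maxFilesPerTrigger.
// The session name, schema fields, and input path are assumptions for illustration.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructType}

val spark = SparkSession.builder().appName("json-stream-example").getOrCreate()

// Supplying the schema up front avoids the extra scan mentioned in the scaladoc.
val schema = new StructType().add("name", StringType).add("city", StringType)

val stream = spark.readStream
  .schema(schema)
  .option("maxFilesPerTrigger", 10) // consider at most 10 new files per trigger
  .json("/tmp/json-input")          // hypothetical input directory
```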