
Commit a97a19d

srowen authored and HyukjinKwon committed
[SPARK-26807][DOCS] Clarify that Pyspark is on PyPi now
## What changes were proposed in this pull request?

Docs still say that Spark will be available on PyPi "in the future"; just needs to be updated.

## How was this patch tested?

Doc build

Closes #23933 from srowen/SPARK-26807.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
1 parent 4a486d6 commit a97a19d

File tree

1 file changed: +1 −1 lines changed


docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ Please see [Spark Security](security.html) before downloading and running Spark.
 Get Spark from the [downloads page](https://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. Spark uses Hadoop's client libraries for HDFS and YARN. Downloads are pre-packaged for a handful of popular Hadoop versions.
 Users can also download a "Hadoop free" binary and run Spark with any Hadoop version
 [by augmenting Spark's classpath](hadoop-provided.html).
-Scala and Java users can include Spark in their projects using its Maven coordinates and in the future Python users can also install Spark from PyPI.
+Scala and Java users can include Spark in their projects using its Maven coordinates and Python users can install Spark from PyPI.


 If you'd like to build Spark from
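The one-line doc change above reflects that PySpark is now published on PyPI under the package name `pyspark`. A minimal sketch of what the updated sentence enables, assuming a recent `pip` and network access:

```shell
# Install PySpark from PyPI; the package is named "pyspark" and
# bundles Spark itself, so no separate Spark download is needed
# for local use. A virtual environment keeps it isolated.
pip install pyspark

# Sanity check: the installed package should import and report a version.
python -c "import pyspark; print(pyspark.__version__)"
```

Note the download is large (the package ships the full Spark distribution), so the install can take a while on slow connections.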

0 commit comments