docs/sql-migration-guide.md: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ license: |
- In Spark 3.2, `FloatType` is mapped to `FLOAT` in MySQL. Prior to this, it used to be mapped to `REAL`, which is by default a synonym to `DOUBLE PRECISION` in MySQL.
- - In Spark 3.2, the query executions triggered by `DataFrameWriter` are always named `command` when being sent to `QueryExecutionListener`. In Spark 3.1 and earlier, the name is one of `save`, `insertInto`, `saveAsTable`, `create`, `append`, `overwrite`, `overwritePartitions`, `replace`.
- In Spark 3.2, the query executions triggered by `DataFrameWriter` are always named `command` when being sent to `QueryExecutionListener`. In Spark 3.1 and earlier, the name is one of `save`, `insertInto`, `saveAsTable`.
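For illustration, a minimal sketch (not from the migration guide itself) of how the `FloatType` mapping note above surfaces when writing to MySQL over JDBC; the connection URL, table name, and credentials are placeholders, and a MySQL JDBC driver is assumed to be on the classpath.

.. code-block:: python

    from pyspark.sql import SparkSession
    from pyspark.sql.types import FloatType, StructField, StructType

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1.0,)], StructType([StructField("value", FloatType())]))

    # Placeholder MySQL connection settings, for illustration only.
    (df.write.format("jdbc")
        .option("url", "jdbc:mysql://localhost:3306/testdb")
        .option("dbtable", "float_test")
        .option("user", "user")
        .option("password", "password")
        .mode("overwrite")
        .save())

    # On Spark 3.2+ the created MySQL column type is FLOAT; on Spark 3.1 and earlier
    # it was REAL, which MySQL treats by default as a synonym for DOUBLE PRECISION.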
@@ -72,17 +72,94 @@ Preparing to Contribute Code Changes
------------------------------------
Before starting to work on code in PySpark, it is recommended to read `the general guidelines <https://spark.apache.org/contributing.html>`_.
- There are a couple of additional notes to keep in mind when contributing to codes in PySpark:
Additionally, there are a couple of notes to keep in mind when contributing to code in PySpark:
* Be Pythonic
See `The Zen of Python <https://www.python.org/dev/peps/pep-0020/>`_.
* Match APIs with Scala and Java sides
Apache Spark is a unified engine that provides a consistent API layer. In general, the APIs are consistently supported across other languages.
* PySpark-specific APIs can be accepted
As long as they are Pythonic and do not conflict with other existing APIs, it is fine to raise an API request, for example, decorator usage of UDFs (see the sketch after this list).
* Adjust the corresponding type hints if you extend or modify public API
See `Contributing and Maintaining Type Hints`_ for details.
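As an illustration of the decorator usage of UDFs mentioned above, a minimal sketch (the ``to_upper`` function is made up for the example):

.. code-block:: python

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf

    spark = SparkSession.builder.getOrCreate()

    # Declaring the UDF with a decorator reads naturally in Python.
    @udf("string")
    def to_upper(s):
        return s.upper() if s is not None else None

    df = spark.createDataFrame([("hello",)], ["word"])
    df.select(to_upper("word")).show()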
If you are fixing the pandas API on Spark (``pyspark.pandas``) package, please consider the design principles below:
* Return pandas-on-Spark data structure for big data, and pandas data structure for small data
Often developers face the question whether a particular function should return a pandas-on-Spark DataFrame/Series, or a pandas DataFrame/Series. The principle is: if the returned object can be large, use a pandas-on-Spark DataFrame/Series. If the data is bound to be small, use a pandas DataFrame/Series. For example, ``DataFrame.dtypes`` returns a pandas Series, because the number of columns in a DataFrame is bounded and small, whereas ``DataFrame.head()`` or ``Series.unique()`` returns a pandas-on-Spark DataFrame/Series, because the resulting object can be large.
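A small sketch of this principle, using made-up data:

.. code-block:: python

    import pyspark.pandas as ps

    psdf = ps.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})

    # The number of columns is bounded and small, so a plain pandas Series comes back.
    print(type(psdf.dtypes))   # pandas.core.series.Series

    # The result can be arbitrarily large, so it stays as a pandas-on-Spark DataFrame.
    print(type(psdf.head()))   # pyspark.pandas.frame.DataFrame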
* Provide discoverable APIs for common data science tasks
At the risk of overgeneralization, there are two API design approaches: the first focuses on providing APIs for common tasks; the second starts with abstractions, and enables users to accomplish their tasks by composing primitives. While the world is not black and white, pandas takes more of the former approach, while Spark has taken more of the latter.
One example is value counts (counting by some key column), one of the most common operations in data science. pandas' ``DataFrame.value_counts`` returns the result in sorted order, which in 90% of the cases is what users prefer when exploring data, whereas Spark's equivalent does not sort, which is more desirable when building data pipelines, as users can achieve the pandas behavior by adding an explicit ``orderBy`` (see the sketch below).
Similar to pandas, pandas API on Spark should also lean more towards the former, providing discoverable APIs for common data science tasks. In most cases, this principle is well taken care of by simply implementing pandas' APIs. However, there will be circumstances in which pandas' APIs don't address a specific need, e.g. plotting for big data.
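To make the value counts example above concrete, a minimal sketch contrasting the two styles (the column and data values are made up):

.. code-block:: python

    import pyspark.pandas as ps
    from pyspark.sql import functions as F

    psdf = ps.DataFrame({"key": ["a", "b", "a", "a", "b"]})

    # pandas-style: sorted by count, convenient for interactive exploration.
    print(psdf["key"].value_counts())

    # Spark-style: no implicit sort; the order is requested explicitly when it matters.
    sdf = psdf.to_spark()
    sdf.groupBy("key").count().orderBy(F.desc("count")).show()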
* Guardrails to prevent users from shooting themselves in the foot
Certain operations in pandas are prohibitively expensive as data scales, and we don't want to give users the illusion that they can rely on such operations in pandas API on Spark. That is to say, methods implemented in pandas API on Spark should be safe to perform by default on large datasets. As a result, the following capabilities are not implemented in pandas API on Spark:
* Capabilities that are fundamentally not parallelizable: e.g. imperatively looping over each element
* Capabilities that require materializing the entire working set in a single node's memory. This is why we do not implement `pandas.DataFrame.to_xarray <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_xarray.html>`_. Another example is that the ``_repr_html_`` call caps the total number of records shown to a maximum of 1000, to prevent users from blowing up their driver node simply by typing the name of the DataFrame in a notebook.
A few exceptions, however, exist. One common pattern with "big data science" is that while the initial dataset is large, the working set becomes smaller as the analysis goes deeper. For example, data scientists often perform aggregation on datasets and want to then convert the aggregated dataset to some local data structure. To help data scientists, we offer the following:
* ``DataFrame.to_pandas``: returns a pandas DataFrame (pandas-on-Spark only)
* ``DataFrame.to_numpy``: returns a numpy array, works with both pandas and pandas API on Spark
Note that it is clear from the names that these functions return some local data structure that would require materializing data in a single node's memory. For these functions, we also explicitly document them with a warning note that the resulting data structure must be small.
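A minimal sketch of the "aggregate first, then convert" pattern described above (the sizes and column names are illustrative):

.. code-block:: python

    import pyspark.pandas as ps

    psdf = ps.range(1000000)            # a large pandas-on-Spark frame
    psdf["bucket"] = psdf["id"] % 10    # derived grouping key

    # The aggregated result has at most 10 rows, so pulling it to the driver is safe.
    small_pdf = psdf.groupby("bucket").count().to_pandas()
    small_arr = psdf.groupby("bucket").count().to_numpy()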
Environment Setup
-----------------

Prerequisite
~~~~~~~~~~~~
PySpark development requires building Spark, which needs a proper JDK installed, etc. See `Building Spark <https://spark.apache.org/docs/latest/building-spark.html>`_ for more details.
Conda
~~~~~
If you are using Conda, the development environment can be set up as follows.
.. code-block:: bash

    # Python 3.6+ is required
    conda create --name pyspark-dev-env python=3.9
    conda activate pyspark-dev-env
    pip install -r dev/requirements.txt
Once it is set up, make sure you switch to `pyspark-dev-env` before starting the development:
.. code-block:: bash

    conda activate pyspark-dev-env
Now, you can start developing and `running the tests <testing.rst>`_.
pip
~~~
With Python 3.6+, pip can be used as below to install and set up the development environment.
.. code-block:: bash

    pip install -r dev/requirements.txt
Now, you can start developing and `running the tests <testing.rst>`_.
- * Be Pythonic.
- * APIs are matched with Scala and Java sides in general.
- * PySpark specific APIs can still be considered as long as they are Pythonic and do not conflict with other existent APIs, for example, decorator usage of UDFs.
- * If you extend or modify public API, please adjust corresponding type hints. See `Contributing and Maintaining Type Hints`_ for details.
Contributing and Maintaining Type Hints
----------------------------------------
- PySpark type hints are provided using stub files, placed in the same directory as the annotated module, with exception to ``# type: ignore`` in modules which don't have their own stubs (tests, examples and non-public API).
PySpark type hints are provided using stub files, placed in the same directory as the annotated module, with the following exceptions:
* ``# type: ignore`` in modules which don't have their own stubs (tests, examples and non-public API).
* pandas API on Spark (``pyspark.pandas`` package) where the type hints are inlined.
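For illustration, a hypothetical stub entry might look like the following (the module and function names are made up; real stubs are the ``*.pyi`` files placed next to the modules they annotate):

.. code-block:: python

    # example.pyi -- hypothetical stub file, for illustration only.
    from typing import Optional

    from pyspark.sql import DataFrame

    def sample_transform(df: DataFrame, fraction: float, seed: Optional[int] = ...) -> DataFrame: ...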
As a rule of thumb, only public API is annotated.
Annotations should, when possible:
@@ -122,16 +199,38 @@ Annotations can be validated using ``dev/lint-python`` script or by invoking myp
mypy --config python/mypy.ini python/pyspark
Code and Docstring Guide
------------------------
Code Conventions
~~~~~~~~~~~~~~~~
Please follow the style of the existing codebase as is, which is virtually PEP 8 with one exception: lines can be up
to 100 characters in length, not 79.
- For the docstring style, PySpark follows `NumPy documentation style <https://numpydoc.readthedocs.io/en/latest/format.html>`_.
- Note that the method and variable names in PySpark are the similar case is ``threading`` library in Python itself where
- the APIs were inspired by Java. PySpark also follows `camelCase` for exposed APIs that match with Scala and Java.
- There is an exception ``functions.py`` that uses `snake_case`. It was in order to make APIs SQL (and Python) friendly.
Note that:
* The method and variable names in PySpark are a similar case to the ``threading`` library in Python itself, whose APIs were inspired by Java. PySpark also follows `camelCase` for exposed APIs that match the Scala and Java sides.
* In contrast, ``functions.py`` uses `snake_case` in order to make APIs SQL (and Python) friendly.
* In addition, pandas-on-Spark (``pyspark.pandas``) also uses `snake_case` because this package does not need to keep API consistency with other languages.
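A small sketch showing these naming conventions side by side (the data values are made up):

.. code-block:: python

    import pyspark.pandas as ps
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a")], ["id", "group_name"])

    # DataFrame methods mirror Scala/Java, hence camelCase.
    df = df.withColumnRenamed("group_name", "groupName")

    # functions.py stays snake_case to be SQL- and Python-friendly.
    df = df.select(F.regexp_replace("groupName", "a", "b").alias("replaced"))

    # pandas API on Spark is snake_case as well, following pandas itself.
    psdf = ps.DataFrame({"id": [1, 2, 2]})
    psdf = psdf.drop_duplicates()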
PySpark leverages linters such as `pycodestyle <https://pycodestyle.pycqa.org/en/latest/>`_ and `flake8 <https://flake8.pycqa.org/en/latest/>`_, which ``dev/lint-python`` runs. Therefore, make sure to run that script to double check.
In general, doctests should be grouped logically by separating them with a newline.
For instance, the first block is for the statements for preparation, the second block is for using the function with a specific argument,
and the third block is for another argument. As an example, please refer to `DataFrame.rsub <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rsub.html#pandas.DataFrame.rsub>`_ in pandas.
These blocks should be consistently separated in PySpark doctests, and more doctests should be added if the doctest coverage or the number of examples shown is not enough.
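For instance, a hypothetical docstring following this convention (the ``add_one`` helper is made up for the example):

.. code-block:: python

    def add_one(col):
        """
        Add one to the given column (hypothetical helper, for illustration only).

        Examples
        --------
        The first block prepares a small input DataFrame.

        >>> from pyspark.sql import SparkSession
        >>> spark = SparkSession.builder.getOrCreate()
        >>> df = spark.createDataFrame([(1,), (2,)], ["value"])

        The second block exercises the function itself.

        >>> df.select(add_one(df.value)).first()[0]
        2
        """
        return col + 1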