Updated read_excel docstring to match style guide formatting #53953

Closed
Changes from 1 commit
144 changes: 72 additions & 72 deletions pandas/io/excel/_base.py
@@ -79,7 +79,7 @@
)
_read_excel_doc = (
"""
Read an Excel file into a pandas DataFrame.
Read an Excel file into a ``pandas`` ``DataFrame``.

Member:
I'm thinking we shouldn't wrap DataFrame throughout. In my opinion, it doesn't need the highlighting, and it can serve as a distraction.

Author:
Good call. I wasn't sure on that one so I'm glad you provided some clarity on it. It could go either way, but I agree that it gets distracting with how often DataFrame is referenced in the docs.


Supports `xls`, `xlsx`, `xlsm`, `xlsb`, `odf`, `ods` and `odt` file extensions
read from a local filesystem or URL. Supports an option to read
@@ -101,61 +101,61 @@
Strings are used for sheet names. Integers are used in zero-indexed
sheet positions (chart sheets do not count as a sheet position).
Lists of strings/integers are used to request multiple sheets.
Specify None to get all worksheets.
Specify ``None`` to get all worksheets.

Available cases:

* Defaults to ``0``: 1st sheet as a `DataFrame`
* ``1``: 2nd sheet as a `DataFrame`
* Defaults to ``0``: 1st sheet as a ``DataFrame``
* ``1``: 2nd sheet as a ``DataFrame``
* ``"Sheet1"``: Load sheet with name "Sheet1"
* ``[0, 1, "Sheet5"]``: Load first, second and sheet named "Sheet5"
as a dict of `DataFrame`
* None: All worksheets.
as a dict of ``DataFrame``
* ``None``: All worksheets.
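
As a quick illustration of these cases (the workbook name here is hypothetical):

>>> import pandas as pd
>>> pd.read_excel('report.xlsx')  # doctest: +SKIP  # 1st sheet
>>> pd.read_excel('report.xlsx', sheet_name=1)  # doctest: +SKIP  # 2nd sheet
>>> pd.read_excel('report.xlsx', sheet_name=[0, 1, 'Sheet5'])  # doctest: +SKIP  # dict of DataFrames
>>> pd.read_excel('report.xlsx', sheet_name=None)  # doctest: +SKIP  # all worksheets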

header : int, list of int, default 0
Row (0-indexed) to use for the column labels of the parsed
DataFrame. If a list of integers is passed those row positions will
be combined into a ``MultiIndex``. Use None if there is no header.
``DataFrame``. If a list of integers is passed those row positions will
be combined into a ``MultiIndex``. Use ``None`` if there is no header.
names : array-like, default None
List of column names to use. If file contains no header row,
then you should explicitly pass header=None.
then you should explicitly pass ``header=None``.
index_col : int, str, list of int, default None
Column (0-indexed) to use as the row labels of the DataFrame.
Column (0-indexed) to use as the row labels of the ``DataFrame``.
Pass None if there is no such column. If a list is passed,
those columns will be combined into a ``MultiIndex``. If a
subset of data is selected with ``usecols``, index_col
subset of data is selected with ``usecols``, ``index_col``
is based on the subset.

Missing values will be forward filled to allow roundtripping with
``to_excel`` for ``merged_cells=True``. To avoid forward filling the
missing values use ``set_index`` after reading the data instead of
``index_col``.
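
A minimal sketch of the alternative described above, assuming a hypothetical workbook whose first column 'id' was written from merged cells:

>>> pd.read_excel('merged.xlsx', index_col=0)  # doctest: +SKIP  # forward-fills the merged index
>>> pd.read_excel('merged.xlsx').set_index('id')  # doctest: +SKIP  # keeps the missing values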
usecols : str, list-like, or callable, default None
* If None, then parse all columns.
* If str, then indicates comma separated list of Excel column letters
* If ``None``, then parse all columns.
* If ``str``, then indicates comma separated list of Excel column letters
and column ranges (e.g. "A:E" or "A,C,E:F"). Ranges are inclusive of
both sides.
* If list of int, then indicates list of column numbers to be parsed
* If list of ``int``, then indicates list of column numbers to be parsed
(0-indexed).
* If list of string, then indicates list of column names to be parsed.
* If list of ``str``, then indicates list of column names to be parsed.
* If callable, then evaluate each column name against it and parse the
column if the callable returns ``True``.

Returns a subset of the columns according to behavior above.
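
For illustration, the ``usecols`` forms above might look like this (file and column names hypothetical):

>>> pd.read_excel('tmp.xlsx', usecols='A:C,E')  # doctest: +SKIP  # Excel letters and ranges
>>> pd.read_excel('tmp.xlsx', usecols=[0, 2])  # doctest: +SKIP  # column numbers
>>> pd.read_excel('tmp.xlsx', usecols=['Name', 'Value'])  # doctest: +SKIP  # column names
>>> pd.read_excel('tmp.xlsx', usecols=lambda c: c.startswith('sales_'))  # doctest: +SKIP  # callable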
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {{'a': np.float64, 'b': np.int32}}
Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32}``

Member:
I believe you need to leave the double curly braces {{ and }} for jinja-style templating.

Author (GarrettDaniel, Jul 3, 2023):
I do agree that we need that for jinja-style templating, but from what I can tell, we aren't using jinja-style templating in this case. If we were, wouldn't we be passing in parameters or referencing an external file to be read in and rendered like a formatted string? (https://realpython.com/primer-on-jinja-templating/). At the moment, it renders like a regular string, but this seems like a perfect use case for a code block.

[screenshot of the rendered docstring]

Member:
Agreed on using the code-block for this, but I believe the change to the curly braces is making the CI fail; e.g.:

https://github.com/pandas-dev/pandas/actions/runs/5426779826/jobs/9869261284?pr=53953#step:9:54

Author (GarrettDaniel, Jul 3, 2023):
Ok that's good to know. In that case, should we just leave it to render as a string with the double curly braces so that CI won't fail? I suppose we could put that in a code block but it might look a little strange (i.e. {{'a': np.float64, 'b': np.int32}})

Member:
> (i.e. {{'a': np.float64, 'b': np.int32}})

When you do this, do both curly braces render in the docs? I would expect only one renders.

Author (GarrettDaniel, Jul 3, 2023):
I've been trying to get the docs to render as per the guide (https://pandas.pydata.org/docs/development/contributing_documentation.html#building-the-documentation), but I keep running into ModuleNotFoundError: No module named 'pandas.__libs.pandas_parser' when running python3 make.py html in the doc directory. This is most likely because of the version of pandas I have on my laptop, but I tried upgrading to the newest version and uninstalling + reinstalling and the error still persists. At this point, I'll commit it with the code block around the braces and if the CI build still fails I'll just remove the code block and leave it to render as standard text.

Member:
Did you create a development environment and compile/install pandas?

https://pandas.pydata.org/docs/development/contributing_environment.html

Author (GarrettDaniel, Jul 13, 2023):
I've tried a few of the different methods on that doc, namely with Docker, DevContainers, and Mamba, and none of them were able to successfully import pandas. I believe it has something to do with a corporate proxy or security issue, but I either run into that same ModuleNotFoundError, or I run into ERROR: Disabling PEP 517 processing is invalid: project specifies a build backend of mesonpy in pyproject.toml when trying to run python -m pip install -e . --no-build-isolation --no-use-pep517. If you have any ideas on this, let me know. Otherwise, I've made all of the other requested changes in the other comments.

Member (rhshadrach, Jul 15, 2023):
I believe you're going off the stable (e.g. 2.0.3) version of the docs for building pandas. When reading docs on development, it's best to read the dev docs as we will break dev-specific things well before releasing 😆. See here:

https://pandas.pydata.org/pandas-docs/dev/development/contributing_environment.html#step-3-build-and-install-pandas

Member (rhshadrach, Jul 15, 2023):
> Otherwise, I've made all of the other requested changes in the other comments.

The CI is failing; this will need to be addressed (which is easiest when you can build locally).
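
For anyone following along, a minimal sketch of why the doubled braces matter: these docstring fragments go through ``str.format``-style substitution before rendering (which is why the CI failed on single braces), so literal braces have to be doubled, and only one pair survives in the output. A pure-Python illustration, not the actual pandas plumbing:

>>> template = "E.g. ``{{'a': np.float64, 'b': np.int32}}``"
>>> template.format()
"E.g. ``{'a': np.float64, 'b': np.int32}``"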

Use `object` to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
of ``dtype`` conversion.
engine : str, default None
If io is not a buffer or path, this must be set to identify io.
Supported engines: "xlrd", "openpyxl", "odf", "pyxlsb".
If ``io`` is not a buffer or path, this must be set to identify ``io``.
Supported engines: ``"xlrd"``, ``"openpyxl"``, ``"odf"``, ``"pyxlsb"``.
Engine compatibility :

- "xlrd" supports old-style Excel files (.xls).
- "openpyxl" supports newer Excel file formats.
- "odf" supports OpenDocument file formats (.odf, .ods, .odt).
- "pyxlsb" supports Binary Excel files.
- ``"xlrd"`` supports old-style Excel files (.xls).
- ``"openpyxl"`` supports newer Excel file formats.
- ``"odf"`` supports OpenDocument file formats (.odf, .ods, .odt).
- ``"pyxlsb"`` supports Binary Excel files.

Member:
I would suggest no double quotes here (so just e.g. ``xlrd``). We can just be referring to the engine itself, rather than the string argument.

Author:
Good call! I'll make sure to change that in the commit I'm working on.
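
For reference, explicitly selecting an engine might look like this (file names hypothetical; when ``io`` is a path, the engine can usually be inferred from the extension):

>>> pd.read_excel('legacy.xls', engine='xlrd')  # doctest: +SKIP
>>> pd.read_excel('tmp.xlsx', engine='openpyxl')  # doctest: +SKIP
>>> pd.read_excel('tmp.ods', engine='odf')  # doctest: +SKIP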


.. versionchanged:: 1.2.0
The engine `xlrd <https://xlrd.readthedocs.io/en/latest/>`_
@@ -181,70 +181,70 @@
input argument, the Excel cell content, and return the transformed
content.
true_values : list, default None
Values to consider as True.
Values to consider as ``True``.
false_values : list, default None
Values to consider as False.
Values to consider as ``False``.
skiprows : list-like, int, or callable, optional
Line numbers to skip (0-indexed) or number of lines to skip (int) at the
Line numbers to skip (0-indexed) or number of lines to skip (``int``) at the
start of the file. If callable, the callable function will be evaluated
against the row indices, returning True if the row should be skipped and
against the row indices, returning ``True`` if the row should be skipped and
False otherwise. An example of a valid callable argument would be ``lambda
x: x in [0, 2]``.
nrows : int, default None
Number of rows to parse.
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific
Additional strings to recognize as NA/NaN. If ``dict`` passed, specific
per-column NA values. By default the following values are interpreted
as NaN: '"""
+ fill("', '".join(sorted(STR_NA_VALUES)), 70, subsequent_indent=" ")
+ """'.
keep_default_na : bool, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether `na_values` is passed in, the behavior is as follows:

* If `keep_default_na` is True, and `na_values` are specified, `na_values`
is appended to the default NaN values used for parsing.
* If `keep_default_na` is True, and `na_values` are not specified, only
the default NaN values are used for parsing.
* If `keep_default_na` is False, and `na_values` are specified, only
the NaN values specified `na_values` are used for parsing.
* If `keep_default_na` is False, and `na_values` are not specified, no
strings will be parsed as NaN.

Note that if `na_filter` is passed in as False, the `keep_default_na` and
`na_values` parameters will be ignored.
Whether or not to include the default ``NaN`` values when parsing the data.
Depending on whether ``na_values`` is passed in, the behavior is as follows:

* If ``keep_default_na=True``, and ``na_values`` are specified, ``na_values``
is appended to the default ``NaN`` values used for parsing.
* If ``keep_default_na=True``, and ``na_values`` are not specified, only
the default ``NaN`` values are used for parsing.
* If ``keep_default_na=False``, and ``na_values`` are specified, only
the ``NaN`` values specified ``na_values`` are used for parsing.
* If ``keep_default_na=False``, and ``na_values`` are not specified, no
strings will be parsed as ``NaN``.

Note that if ``na_filter=False``, the ``keep_default_na`` and
``na_values`` parameters will be ignored.
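
A short sketch of these combinations (the file name and the 'missing' marker are hypothetical):

>>> pd.read_excel('tmp.xlsx', na_values=['missing'])  # doctest: +SKIP  # 'missing' plus the defaults
>>> pd.read_excel('tmp.xlsx', na_values=['missing'], keep_default_na=False)  # doctest: +SKIP  # only 'missing'
>>> pd.read_excel('tmp.xlsx', na_filter=False)  # doctest: +SKIP  # no NA detection at all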
na_filter : bool, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
Detect missing value markers (empty strings and the value of ``na_values``). In
data without any NAs, ``passing na_filter=False`` can improve the performance

Member:
passing shouldn't be included (just na_filter=False)

Author:
Nice catch!

of reading a large file.
verbose : bool, default False
Indicate number of NA values placed in non-numeric columns.
parse_dates : bool, list-like, or dict, default False
The behavior is as follows:

* bool. If True -> try parsing the index.
* list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
* ``bool``. If True -> try parsing the index.
* ``list`` of ``int`` or names. e.g. If ``[1, 2, 3]`` -> try parsing columns 1, 2, 3
each as a separate date column.
* list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
* ``list`` of lists. e.g. If ``[[1, 3]]`` -> combine columns 1 and 3 and parse as
a single date column.
* dict, e.g. {{'foo' : [1, 3]}} -> parse columns 1, 3 as date and call
result 'foo'
* ``dict``, e.g. ``{'foo' : [1, 3]}`` -> parse columns 1, 3 as date and call
result ``'foo'``

If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. If you don`t want to
parse some cells as date just change their type in Excel to "Text".
For non-standard datetime parsing, use ``pd.to_datetime`` after ``pd.read_excel``.
parse some cells as date, just change their type in Excel to "Text".
For non-standard ``datetime`` parsing, use ``pd.to_datetime`` after ``pd.read_excel``.

Member:
I don't think datetime here is being used in a technical sense (e.g. specifying a package or snippet of code), and so shouldn't be highlighted.

Author (GarrettDaniel, Jul 3, 2023):
Good point. If it was referencing the datetime package or dtype, that would be a more appropriate time to put it in a code block.


Note: A fast-path exists for iso8601-formatted dates.
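
If a non-standard format trips the parser, the pattern recommended above might look like this (column name and date format hypothetical):

>>> df = pd.read_excel('tmp.xlsx')  # doctest: +SKIP
>>> df['when'] = pd.to_datetime(df['when'], format='%d.%m.%Y')  # doctest: +SKIP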
date_parser : function, optional
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses ``dateutil.parser.parser`` to do the
conversion. Pandas will try to call `date_parser` in three different ways,
``datetime`` instances. The default uses ``dateutil.parser.parser`` to do the
conversion. Pandas will try to call ``date_parser`` in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by `parse_dates` into a single array
and pass that; and 3) call `date_parser` once for each row using one or
more strings (corresponding to the columns defined by `parse_dates`) as
(as defined by ``parse_dates``) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by ``parse_dates`` into a single array
and pass that; and 3) call ``date_parser`` once for each row using one or
more strings (corresponding to the columns defined by ``parse_dates``) as
arguments.

.. deprecated:: 2.0.0
@@ -279,13 +279,13 @@

.. versionadded:: 1.2.0

dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
arrays, nullable dtypes are used for all dtypes that have a nullable
implementation when "numpy_nullable" is set, pyarrow is used for all
dtypes if "pyarrow" is set.
dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to ``numpy`` backed ``DataFrames``

Member:
NumPy is the correct capitalization, now?

Author:
It is, I just wasn't sure if it would be better to go with NumPy or numpy, but like the comment above with DataFrame and pandas, I agree that it would be distracting to make every instance of NumPy a code block.

Which ``dtype_backend`` to use, e.g. whether a ``DataFrame`` should have ``numpy``
arrays, nullable ``dtypes`` are used for all ``dtypes`` that have a nullable
implementation when ``"numpy_nullable"`` is set, ``pyarrow`` is used for all
dtypes if ``"pyarrow"`` is set.

The dtype_backends are still experimential.
The ``dtype_backends`` are still experimential.
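
For illustration, selecting a backend might look like this (file name hypothetical):

>>> pd.read_excel('tmp.xlsx', dtype_backend='numpy_nullable')  # doctest: +SKIP
>>> pd.read_excel('tmp.xlsx', dtype_backend='pyarrow')  # doctest: +SKIP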

.. versionadded:: 2.0

@@ -295,15 +295,15 @@
Returns
-------
DataFrame or dict of DataFrames
DataFrame from the passed in Excel file. See notes in sheet_name
argument for more information on when a dict of DataFrames is returned.
``DataFrame`` from the passed in Excel file. See notes in ``sheet_name``
argument for more information on when a ``dict`` of ``DataFrames`` is returned.

See Also
--------
DataFrame.to_excel : Write DataFrame to an Excel file.
DataFrame.to_csv : Write DataFrame to a comma-separated values (csv) file.
read_csv : Read a comma-separated values (csv) file into DataFrame.
read_fwf : Read a table of fixed-width formatted lines into DataFrame.
DataFrame.to_excel : Write ``DataFrame`` to an Excel file.
DataFrame.to_csv : Write ``DataFrame`` to a comma-separated values (csv) file.
read_csv : Read a comma-separated values (csv) file into ``DataFrame``.
read_fwf : Read a table of fixed-width formatted lines into ``DataFrame``.

Notes
-----
@@ -327,7 +327,7 @@
1 1 string2 2
2 2 #Comment 3

Index and header can be specified via the `index_col` and `header` arguments
Index and header can be specified via the ``index_col`` and ``header`` arguments

>>> pd.read_excel('tmp.xlsx', index_col=None, header=None) # doctest: +SKIP
0 1 2
@@ -345,7 +345,7 @@
1 string2 2.0
2 #Comment 3.0

True, False, and NA values, and thousands separators have defaults,
``True``, ``False``, ``NaN`` values, and thousands of separators have defaults,

Member:
The docs use NA as opposed to NaN in various places because it can refer to pd.NA. Can you revert this change.

Author:
On it.

but can be explicitly specified, too. Supply the values you would like
as strings or lists of strings!

@@ -356,7 +356,7 @@
1 NaN 2
2 #Comment 3

Comment lines in the excel input file can be skipped using the `comment` kwarg
Comment lines in the excel input file can be skipped using the ``comment`` ``kwarg``

Member:
kwarg here should not be included. Can you write out the full phrase here: keyword argument.

Author:
Good call. Should comment still be in a code block in this case since it's referencing the name of a parameter?

Member:
Yes - I think so.


>>> pd.read_excel('tmp.xlsx', index_col=0, comment='#') # doctest: +SKIP
Name Value