
Commit da1151c

simonjayhawkins authored and jreback committed
CLN: remove versionadded:: 0.20 (#29126)
1 parent 9828d34 commit da1151c

39 files changed: 0 additions, 162 deletions

doc/source/development/contributing.rst

Lines changed: 0 additions & 2 deletions
@@ -1197,8 +1197,6 @@ submitting a pull request.
 
 For more, see the `pytest <http://docs.pytest.org/en/latest/>`_ documentation.
 
-.. versionadded:: 0.20.0
-
 Furthermore one can run
 
 .. code-block:: python

doc/source/getting_started/basics.rst

Lines changed: 0 additions & 6 deletions
@@ -172,8 +172,6 @@ You are highly encouraged to install both libraries. See the section
 
 These are both enabled to be used by default, you can control this by setting the options:
 
-.. versionadded:: 0.20.0
-
 .. code-block:: python
 
     pd.set_option('compute.use_bottleneck', False)
@@ -891,8 +889,6 @@ functionality.
 Aggregation API
 ~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
 This API is similar across pandas objects, see :ref:`groupby API <groupby.aggregate>`, the
 :ref:`window functions API <stats.aggregate>`, and the :ref:`resample API <timeseries.aggregate>`.
@@ -1030,8 +1026,6 @@ to the built in :ref:`describe function <basics.describe>`.
 Transform API
 ~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The :meth:`~DataFrame.transform` method returns an object that is indexed the same (same size)
 as the original. This API allows you to provide *multiple* operations at the same
 time rather than one-by-one. Its API is quite similar to the ``.agg`` API.
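The ``.agg`` and ``.transform`` APIs whose ``versionadded`` notes are removed above can be sketched as follows; the sample frame and values are illustrative only, not part of the commit:

```python
import pandas as pd

# Illustrative frame (not from the commit)
df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [4.0, 5.0, 6.0]})

# .agg expresses multiple aggregations in one call: one result row per function
summary = df.agg(['sum', 'mean'])

# .transform returns an object indexed the same (same size) as the original
demeaned = df.transform(lambda x: x - x.mean())
```
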

doc/source/user_guide/advanced.rst

Lines changed: 0 additions & 4 deletions
@@ -206,8 +206,6 @@ highly performant. If you want to see only the used levels, you can use the
 To reconstruct the ``MultiIndex`` with only the used levels, the
 :meth:`~MultiIndex.remove_unused_levels` method may be used.
 
-.. versionadded:: 0.20.0
-
 .. ipython:: python
 
     new_mi = df[['foo', 'qux']].columns.remove_unused_levels()
@@ -928,8 +926,6 @@ If you need integer based selection, you should use ``iloc``:
 IntervalIndex
 ~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 :class:`IntervalIndex` together with its own dtype, :class:`~pandas.api.types.IntervalDtype`
 as well as the :class:`Interval` scalar type, allow first-class support in pandas
 for interval notation.
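Both features touched in this file can be sketched briefly; the ``MultiIndex`` and interval values below are made up for illustration:

```python
import pandas as pd

# remove_unused_levels: selecting a subset keeps the original levels around
mi = pd.MultiIndex.from_product([['foo', 'qux'], ['one', 'two']])
sub = mi[[0, 1]]                       # only 'foo' entries remain in the data
trimmed = sub.remove_unused_levels()   # 'qux' dropped from the levels

# IntervalIndex: first-class interval notation
ii = pd.interval_range(start=0, end=3)  # (0, 1], (1, 2], (2, 3]
```
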

doc/source/user_guide/categorical.rst

Lines changed: 0 additions & 2 deletions
@@ -874,8 +874,6 @@ The below raises ``TypeError`` because the categories are ordered and not identical.
 Out[3]:
 TypeError: to union ordered Categoricals, all categories must be the same
 
-.. versionadded:: 0.20.0
-
 Ordered categoricals with different categories or orderings can be combined by
 using the ``ignore_ordered=True`` argument.
 
doc/source/user_guide/computation.rst

Lines changed: 0 additions & 2 deletions
@@ -471,8 +471,6 @@ default of the index) in a DataFrame.
 Rolling window endpoints
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.0
-
 The inclusion of the interval endpoints in rolling window calculations can be specified with the ``closed``
 parameter:
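The ``closed`` parameter mentioned in this hunk can be sketched as below; the series and dates are an illustrative assumption, not from the docs being edited:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0],
              index=pd.date_range('2020-01-01', periods=4, freq='D'))

# 'right' (the default for offset windows) excludes the left endpoint;
# 'both' includes it, so a point exactly 2 days back re-enters the window
right = s.rolling('2D', closed='right').sum()
both = s.rolling('2D', closed='both').sum()
```
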

doc/source/user_guide/groupby.rst

Lines changed: 0 additions & 6 deletions
@@ -311,8 +311,6 @@ Grouping with multiple levels is supported.
     s
     s.groupby(level=['first', 'second']).sum()
 
-.. versionadded:: 0.20
-
 Index level names may be supplied as keys.
 
 .. ipython:: python
@@ -353,8 +351,6 @@ Index levels may also be specified by name.
 
     df.groupby([pd.Grouper(level='second'), 'A']).sum()
 
-.. versionadded:: 0.20
-
 Index level names may be specified as keys directly to ``groupby``.
 
 .. ipython:: python
@@ -1274,8 +1270,6 @@ To see the order in which each row appears within its group, use the
 Enumerate groups
 ~~~~~~~~~~~~~~~~
 
-.. versionadded:: 0.20.2
-
 To see the ordering of the groups (as opposed to the order of rows
 within a group given by ``cumcount``) you can use
 :meth:`~pandas.core.groupby.DataFrameGroupBy.ngroup`.
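The ``ngroup``/``cumcount`` distinction from the last hunk can be sketched as follows; the tiny frame is an illustrative assumption:

```python
import pandas as pd

df = pd.DataFrame({'A': list('aaba')})
g = df.groupby('A')

group_ids = g.ngroup()    # which group each row belongs to, in group order
row_ids = g.cumcount()    # position of each row within its own group
```
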

doc/source/user_guide/io.rst

Lines changed: 0 additions & 21 deletions
@@ -163,9 +163,6 @@ dtype : Type name or dict of column -> type, default ``None``
     (unsupported with ``engine='python'``). Use `str` or `object` together
     with suitable ``na_values`` settings to preserve and
     not interpret dtype.
-
-    .. versionadded:: 0.20.0 support for the Python parser.
-
 engine : {``'c'``, ``'python'``}
     Parser engine to use. The C engine is faster while the Python engine is
     currently more feature-complete.
@@ -417,10 +414,6 @@ However, if you wanted for all the data to be coerced, no matter the type, then
 using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
 worth trying.
 
-.. versionadded:: 0.20.0 support for the Python parser.
-
-The ``dtype`` option is supported by the 'python' engine.
-
 .. note::
     In some cases, reading in abnormal data with columns containing mixed dtypes
     will result in an inconsistent dataset. If you rely on pandas to infer the
@@ -616,8 +609,6 @@ Filtering columns (``usecols``)
 The ``usecols`` argument allows you to select any subset of the columns in a
 file, either using the column names, position numbers or a callable:
 
-.. versionadded:: 0.20.0 support for callable `usecols` arguments
-
 .. ipython:: python
 
     data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
@@ -1447,8 +1438,6 @@ is whitespace).
     df = pd.read_fwf('bar.csv', header=None, index_col=0)
     df
 
-.. versionadded:: 0.20.0
-
 ``read_fwf`` supports the ``dtype`` parameter for specifying the types of
 parsed columns to be different from the inferred type.
 
@@ -2221,8 +2210,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
 Table schema
 ''''''''''''
 
-.. versionadded:: 0.20.0
-
 `Table Schema`_ is a spec for describing tabular datasets as a JSON
 object. The JSON includes information on the field names, types, and
 other attributes. You can use the orient ``table`` to build
@@ -3071,8 +3058,6 @@ missing data to recover integer dtype:
 Dtype specifications
 ++++++++++++++++++++
 
-.. versionadded:: 0.20
-
 As an alternative to converters, the type for an entire column can
 be specified using the `dtype` keyword, which takes a dictionary
 mapping column names to types. To interpret data with
@@ -3345,8 +3330,6 @@ any pickled pandas object (or any other pickled object) from file:
 Compressed pickle files
 '''''''''''''''''''''''
 
-.. versionadded:: 0.20.0
-
 :func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
 and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
 The ``zip`` file format only supports reading and must contain only one data file
@@ -4323,8 +4306,6 @@ control compression: ``complevel`` and ``complib``.
 - `bzip2 <http://bzip.org/>`_: Good compression rates.
 - `blosc <http://www.blosc.org/>`_: Fast compression and decompression.
 
-.. versionadded:: 0.20.2
-
 Support for alternative blosc compressors:
 
 - `blosc:blosclz <http://www.blosc.org/>`_ This is the
@@ -4651,8 +4632,6 @@ Performance
 Feather
 -------
 
-.. versionadded:: 0.20.0
-
 Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
 frames efficient, and to make sharing data across data analysis languages easy.
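Two of the IO features whose notes are removed above, callable ``usecols`` and compressed pickles, can be sketched together; the CSV payload and the temporary file path are illustrative assumptions:

```python
import io
import os
import tempfile

import pandas as pd

# Callable usecols: keep only the columns the predicate accepts
data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
df = pd.read_csv(io.StringIO(data), usecols=lambda name: name in ['a', 'c'])

# Compressed pickle round trip; compression is inferred from the extension
path = os.path.join(tempfile.mkdtemp(), 'frame.pkl.gz')
df.to_pickle(path)
restored = pd.read_pickle(path)
```
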

doc/source/user_guide/merging.rst

Lines changed: 0 additions & 2 deletions
@@ -843,8 +843,6 @@ resulting dtype will be upcast.
     pd.merge(left, right, how='outer', on='key')
     pd.merge(left, right, how='outer', on='key').dtypes
 
-.. versionadded:: 0.20.0
-
 Merging will preserve ``category`` dtypes of the mergands. See also the section on :ref:`categoricals <categorical.merge>`.
 
 The left frame.
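The category-preserving merge described in this hunk can be sketched as below; the frames are illustrative, and the key columns are given the same categorical dtype on both sides, which is the case the docs describe:

```python
import pandas as pd

# Both key columns share the same categorical dtype, so merge preserves it
left = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'lval': [1, 2]})
right = pd.DataFrame({'key': pd.Categorical(['a', 'b']), 'rval': [3, 4]})

merged = pd.merge(left, right, how='outer', on='key')
```
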

doc/source/user_guide/options.rst

Lines changed: 0 additions & 2 deletions
@@ -561,8 +561,6 @@ However, setting this option incorrectly for your terminal will cause these characters
 Table schema display
 --------------------
 
-.. versionadded:: 0.20.0
-
 ``DataFrame`` and ``Series`` will publish a Table Schema representation
 by default. False by default, this can be enabled globally with the
 ``display.html.table_schema`` option:
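Enabling and restoring the option named in this hunk is a one-liner each way; this sketch just toggles it and reads it back:

```python
import pandas as pd

# Off by default; when enabled, DataFrame/Series publish a Table Schema
# representation alongside their repr (used by e.g. Jupyter frontends)
pd.set_option('display.html.table_schema', True)
enabled = pd.get_option('display.html.table_schema')
pd.set_option('display.html.table_schema', False)  # restore the default
```
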

doc/source/user_guide/reshaping.rst

Lines changed: 0 additions & 2 deletions
@@ -539,8 +539,6 @@ Alternatively we can specify custom bin-edges:
     c = pd.cut(ages, bins=[0, 18, 35, 70])
     c
 
-.. versionadded:: 0.20.0
-
 If the ``bins`` keyword is an ``IntervalIndex``, then these will be
 used to bin the passed data.::
 
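Passing an ``IntervalIndex`` as ``bins``, as this hunk describes, can be sketched with made-up ages matching the bin edges shown in the surrounding docs:

```python
import pandas as pd

ages = [10, 25, 50]
bins = pd.IntervalIndex.from_breaks([0, 18, 35, 70])  # (0, 18], (18, 35], (35, 70]

c = pd.cut(ages, bins=bins)  # each value lands in its enclosing interval
```
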

0 commit comments
