2 changes: 0 additions & 2 deletions doc/source/user_guide/duplicates.rst
@@ -109,8 +109,6 @@ with the same label.
Disallowing Duplicate Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 1.2.0

As noted above, handling duplicates is an important feature when reading in raw
data. That said, you may want to avoid introducing duplicates as part of a data
processing pipeline (from methods like :meth:`pandas.concat`,
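As a quick illustration of the behaviour this section describes, a minimal sketch (the frame and labels are hypothetical) of disallowing duplicate labels with ``set_flags`` and then triggering the check with a reindex that would introduce duplicates:

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "b", "c"]).set_flags(
        allows_duplicate_labels=False
    )
    # Reindexing onto a repeated label would create duplicates, so this raises
    # pandas.errors.DuplicateLabelError instead of silently producing them.
    df.reindex(["a", "a"])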
2 changes: 0 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -1264,8 +1264,6 @@ with
Numba accelerated routines
--------------------------

.. versionadded:: 1.1

If `Numba <https://numba.pydata.org/>`__ is installed as an optional dependency, the ``transform`` and
``aggregate`` methods support ``engine='numba'`` and ``engine_kwargs`` arguments.
See :ref:`enhancing performance with Numba <enhancingperf.numba>` for general usage of the arguments
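A minimal sketch of the Numba-accelerated groupby path described above (assuming Numba is installed; the data and the ``demean`` function are illustrative). With ``engine='numba'``, the user-defined function receives the group's values and index as its first two arguments:

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"key": ["a", "a", "b", "b"], "value": [1.0, 2.0, 3.0, 4.0]})

    def demean(values, index):
        # values is a NumPy array of the group's values; index holds its labels
        return values - values.mean()

    df.groupby("key")["value"].transform(demean, engine="numba")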
34 changes: 2 additions & 32 deletions doc/source/user_guide/io.rst
@@ -158,12 +158,6 @@ dtype : Type name or dict of column -> type, default ``None``
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.

.. versionadded:: 1.5.0

Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.

dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
arrays, nullable dtypes are used for all dtypes that have a nullable
@@ -177,12 +171,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
engine : {``'c'``, ``'python'``, ``'pyarrow'``}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.

.. versionadded:: 1.4.0

The "pyarrow" engine was added as an *experimental* engine, and some features
are unsupported, or may not work correctly, with this engine.
the pyarrow engine. The "pyarrow" engine was added as an *experimental* engine,
and some features are unsupported, or may not work correctly, with this engine.
converters : dict, default ``None``
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
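A minimal sketch of the ``engine`` and ``dtype_backend`` options described above (assuming pyarrow is installed; the inline CSV is illustrative):

.. code-block:: python

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6"
    df = pd.read_csv(StringIO(data), engine="pyarrow", dtype_backend="pyarrow")
    df.dtypes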
@@ -357,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
- 'warn', print a warning when a bad line is encountered and skip that line.
- 'skip', skip bad lines without raising or warning when they are encountered.

.. versionadded:: 1.3.0

.. _io.dtypes:

Specifying column data types
@@ -937,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
Writing CSVs to binary file objects
+++++++++++++++++++++++++++++++++++

.. versionadded:: 1.2.0

``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
``mode`` as pandas will auto-detect whether the file object is
@@ -1124,8 +1110,6 @@ You can elect to skip bad lines:
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
pd.read_csv(StringIO(data), on_bad_lines="skip")

.. versionadded:: 1.4.0

Or pass a callable function to handle the bad line if ``engine="python"``.
The bad line will be a list of strings that was split by the ``sep``:
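A minimal sketch of the callable form (it requires ``engine="python"``; the handler below simply keeps the last three fields of the offending line and is purely illustrative):

.. code-block:: python

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

    def handle_bad_line(bad_line):
        # bad_line is a list of strings; return a list with the expected number
        # of fields, or None to skip the line entirely.
        return bad_line[-3:]

    pd.read_csv(StringIO(data), on_bad_lines=handle_bad_line, engine="python")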

@@ -1553,8 +1537,6 @@ functions - the following example shows reading a CSV file:

df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

.. versionadded:: 1.3.0

A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the ``storage_options`` keyword argument as shown below:

@@ -1606,8 +1588,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
If you do *not* have S3 credentials, you can still access public
data by specifying an anonymous connection, such as

.. versionadded:: 1.2.0

.. code-block:: python

pd.read_csv(
@@ -2541,8 +2521,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
df[("GitHub", None)]
df[("GitHub", None)].str[1]

.. versionadded:: 1.5.0
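A self-contained sketch of how the cells above are produced (assuming an HTML parser such as lxml is installed; the table markup is illustrative). With ``extract_links="all"``, header and data cells become ``(text, link)`` tuples:

.. code-block:: python

    from io import StringIO

    import pandas as pd

    html = """
    <table>
      <tr><th>GitHub</th></tr>
      <tr><td><a href="https://github.com/pandas-dev/pandas">pandas</a></td></tr>
    </table>
    """
    df = pd.read_html(StringIO(html), extract_links="all")[0]
    df[("GitHub", None)]
    df[("GitHub", None)].str[1]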

.. _io.html:

Writing to HTML files
@@ -2732,8 +2710,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
LaTeX
-----

.. versionadded:: 1.3.0

Currently there are no methods to read from LaTeX, only output methods.

Writing to LaTeX files
@@ -2772,8 +2748,6 @@ XML
Reading XML
'''''''''''

.. versionadded:: 1.3.0

The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.
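A minimal sketch of parsing an inline XML document (wrapped in ``StringIO``; the markup is illustrative):

.. code-block:: python

    from io import StringIO

    import pandas as pd

    xml = """<?xml version="1.0"?>
    <data>
      <row><shape>square</shape><sides>4</sides></row>
      <row><shape>circle</shape><sides/></row>
    </data>"""
    df = pd.read_xml(StringIO(xml))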

@@ -3099,8 +3073,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
which are memory-efficient methods to iterate through an XML tree and extract
specific elements and attributes without holding the entire tree in memory.

.. versionadded:: 1.5.0

.. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
.. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse
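A minimal sketch of the ``iterparse`` argument (the file path and element names are hypothetical). The dictionary maps the repeating element to the descendant elements or attributes to extract, and the document must be a file on disk rather than an in-memory string:

.. code-block:: python

    import pandas as pd

    # hypothetical large XML file with repeating <row> elements
    df = pd.read_xml(
        "/path/to/very_large.xml",
        iterparse={"row": ["shape", "degrees", "sides"]},
    )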

@@ -3139,8 +3111,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
Writing XML
'''''''''''

.. versionadded:: 1.3.0

``DataFrame`` objects have an instance method ``to_xml`` which renders the
contents of the ``DataFrame`` as an XML document.
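A minimal sketch of the output (the frame is illustrative):

.. code-block:: python

    import pandas as pd

    geom_df = pd.DataFrame(
        {"shape": ["square", "circle", "triangle"], "sides": [4.0, float("nan"), 3.0]}
    )
    print(geom_df.to_xml())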

2 changes: 0 additions & 2 deletions doc/source/user_guide/reshaping.rst
@@ -478,8 +478,6 @@ The values can be cast to a different type using the ``dtype`` argument.

pd.get_dummies(df, dtype=np.float32).dtypes

.. versionadded:: 1.5.0

:func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
a :class:`Series` of categorical values from indicator values.
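A minimal sketch of the round trip (the column names are illustrative):

.. code-block:: python

    import pandas as pd

    dummies = pd.get_dummies(pd.DataFrame({"col1": ["a", "b", "a"]}))
    dummies
    pd.from_dummies(dummies, sep="_")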

2 changes: 0 additions & 2 deletions doc/source/user_guide/text.rst
@@ -335,8 +335,6 @@ regular expression object will raise a ``ValueError``.
``removeprefix`` and ``removesuffix`` have the same effect as ``str.removeprefix`` and ``str.removesuffix`` added in
`Python 3.9 <https://docs.python.org/3/library/stdtypes.html#str.removeprefix>`__:

.. versionadded:: 1.4.0

.. ipython:: python

s = pd.Series(["str_foo", "str_bar", "no_prefix"])
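A minimal sketch of both string methods above (the Series values are illustrative):

.. code-block:: python

    import pandas as pd

    s = pd.Series(["str_foo", "str_bar", "no_prefix"])
    s.str.removeprefix("str_")

    s = pd.Series(["foo_str", "bar_str", "no_suffix"])
    s.str.removesuffix("_str")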
2 changes: 0 additions & 2 deletions doc/source/user_guide/timeseries.rst
@@ -1964,8 +1964,6 @@ Note the use of ``'start'`` for ``origin`` on the last example. In that case, ``
Backward resample
~~~~~~~~~~~~~~~~~

.. versionadded:: 1.3.0

Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given ``freq``. The backward resample sets ``closed`` to ``'right'`` by default since the last value should be considered as the edge point for the last bin.

We can set ``origin`` to ``'end'``. The value for a specific ``Timestamp`` index stands for the resample result from the current ``Timestamp`` minus ``freq`` to the current ``Timestamp`` with a right close.
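A minimal sketch of a backward resample with ``origin='end'`` (the series is illustrative):

.. code-block:: python

    import pandas as pd

    rng = pd.date_range("2000-10-01 23:30:00", periods=10, freq="7min")
    ts = pd.Series(range(10), index=rng)
    ts.resample("17min", origin="end").sum()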
2 changes: 0 additions & 2 deletions doc/source/user_guide/visualization.rst
@@ -649,8 +649,6 @@ each point:
If a categorical column is passed to ``c``, then a discrete colorbar will be produced:

.. versionadded:: 1.3.0

.. ipython:: python
@savefig scatter_plot_categorical.png
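A minimal sketch of the discrete colorbar produced by a categorical ``c`` column (assuming Matplotlib is installed; the data are illustrative):

.. code-block:: python

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.rand(30, 2), columns=["a", "b"])
    df["species"] = pd.Categorical(np.random.choice(["setosa", "virginica"], 30))
    df.plot.scatter(x="a", y="b", c="species", cmap="viridis")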
14 changes: 2 additions & 12 deletions doc/source/user_guide/window.rst
@@ -76,9 +76,6 @@ which will first group the data by the specified keys and then perform a windowi
<https://en.wikipedia.org/wiki/Kahan_summation_algorithm>`__ is used
to compute the rolling sums to preserve accuracy as much as possible.


.. versionadded:: 1.3.0

Some windowing operations also support the ``method='table'`` option in the constructor which
performs the windowing operation over an entire :class:`DataFrame` instead of a single column at a time.
This can provide a useful performance benefit for a :class:`DataFrame` with many columns
@@ -100,8 +97,6 @@ be calculated with :meth:`~Rolling.apply` by specifying a separate column of wei
df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba") # noqa: E501

.. versionadded:: 1.3

Some windowing operations also support an ``online`` method after constructing a windowing object
which returns a new object that supports passing in new :class:`DataFrame` or :class:`Series` objects
to continue the windowing calculation with the new values (i.e. online calculations).
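A minimal sketch of an online calculation with ``ewm`` (assuming Numba is installed, which the ``online`` method requires; the data are illustrative):

.. code-block:: python

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.rand(5, 2))
    online_ewm = df.head(3).ewm(com=0.5).online()
    online_ewm.mean()
    # continue the calculation with new rows
    online_ewm.mean(update=df.tail(2))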
@@ -182,8 +177,6 @@ By default the labels are set to the right edge of the window, but a

This can also be applied to datetime-like indices.

.. versionadded:: 1.3.0

.. ipython:: python

df = pd.DataFrame(
@@ -363,11 +356,8 @@ Numba will be applied in potentially two routines:
The ``engine_kwargs`` argument is a dictionary of keyword arguments that will be passed into the
`numba.jit decorator <https://numba.readthedocs.io/en/stable/user/jit.html>`__.
These keyword arguments will be applied to *both* the passed function (if a standard Python function)
and the apply for loop over each window.

.. versionadded:: 1.3.0

``mean``, ``median``, ``max``, ``min``, and ``sum`` also support the ``engine`` and ``engine_kwargs`` arguments.
and the apply for loop over each window. ``mean``, ``median``, ``max``, ``min``, and ``sum``
also support the ``engine`` and ``engine_kwargs`` arguments.
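A minimal sketch of a built-in aggregation running under Numba (assuming Numba is installed; the data are illustrative):

.. code-block:: python

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.rand(1_000, 4))
    df.rolling(10).mean(engine="numba", engine_kwargs={"parallel": True})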

.. _window.cov_corr:

4 changes: 0 additions & 4 deletions pandas/_testing/asserters.py
@@ -931,14 +931,10 @@ def assert_series_equal(
assertion message.
check_index : bool, default True
Whether to check index equivalence. If False, then compare only values.

.. versionadded:: 1.3.0
check_like : bool, default False
If True, ignore the order of the index. Must be False if check_index is False.
Note: the same labels must correspond to the same data.

.. versionadded:: 1.5.0

See Also
--------
testing.assert_index_equal : Check that two Indexes are equal.
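A minimal sketch of ``check_like`` through the public ``pandas.testing`` entry point (the Series are illustrative):

.. code-block:: python

    import pandas as pd

    left = pd.Series([1, 2, 3], index=["a", "b", "c"])
    right = pd.Series([2, 1, 3], index=["b", "a", "c"])
    # same labels and data, different order: passes only because check_like=True
    pd.testing.assert_series_equal(left, right, check_like=True)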
16 changes: 8 additions & 8 deletions pandas/core/algorithms.py
@@ -698,8 +698,7 @@ def factorize(
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.

.. versionadded:: 1.5.0
{size_hint}\
{size_hint}

Returns
-------
@@ -731,7 +730,7 @@ ``pd.factorize(values)``. The results are identical for methods like
``pd.factorize(values)``. The results are identical for methods like
:meth:`Series.factorize`.

>>> codes, uniques = pd.factorize(np.array(['b', 'b', 'a', 'c', 'b'], dtype="O"))
>>> codes, uniques = pd.factorize(np.array(["b", "b", "a", "c", "b"], dtype="O"))
>>> codes
array([0, 0, 1, 2, 0])
>>> uniques
@@ -740,8 +739,9 @@
With ``sort=True``, the `uniques` will be sorted, and `codes` will be
shuffled so that the relationship is maintained.

>>> codes, uniques = pd.factorize(np.array(['b', 'b', 'a', 'c', 'b'], dtype="O"),
... sort=True)
>>> codes, uniques = pd.factorize(
... np.array(["b", "b", "a", "c", "b"], dtype="O"), sort=True
... )
>>> codes
array([1, 1, 0, 2, 1])
>>> uniques
@@ -751,7 +751,7 @@
the `codes` with the sentinel value ``-1`` and missing values are not
included in `uniques`.

>>> codes, uniques = pd.factorize(np.array(['b', None, 'a', 'c', 'b'], dtype="O"))
>>> codes, uniques = pd.factorize(np.array(["b", None, "a", "c", "b"], dtype="O"))
>>> codes
array([ 0, -1, 1, 2, 0])
>>> uniques
@@ -761,7 +761,7 @@
NumPy arrays). When factorizing pandas objects, the type of `uniques`
will differ. For Categoricals, a `Categorical` is returned.

>>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
>>> cat = pd.Categorical(["a", "a", "c"], categories=["a", "b", "c"])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1])
@@ -775,7 +775,7 @@
For all other pandas objects, an Index of the appropriate type is
returned.

>>> cat = pd.Series(['a', 'a', 'c'])
>>> cat = pd.Series(["a", "a", "c"])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1])
2 changes: 0 additions & 2 deletions pandas/core/arrays/base.py
@@ -1574,8 +1574,6 @@ def factorize(
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.

.. versionadded:: 1.5.0

Returns
-------
codes : ndarray
2 changes: 0 additions & 2 deletions pandas/core/common.py
@@ -639,8 +639,6 @@ def fill_missing_names(names: Sequence[Hashable | None]) -> list[Hashable]:
"""
If a name is missing then replace it by level_n, where n is the count

.. versionadded:: 1.4.0

Parameters
----------
names : list-like
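A minimal sketch of the helper (note it lives in ``pandas.core.common``, so the import path is internal rather than public API):

.. code-block:: python

    from pandas.core.common import fill_missing_names

    fill_missing_names([None, "b", None])
    # -> ['level_0', 'b', 'level_2']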