Description
Code Sample, a copy-pastable example if possible
import time

import numpy as np
import pandas as pd

# Two float64 columns of 10 million elements each.
df = pd.DataFrame({
    "a": np.empty(10_000_000),
    "b": np.empty(10_000_000),
})
# Casting one column to object dtype is what triggers the slowdown.
df["a"] = df["a"].astype("object")

s = time.time()
mem = df.memory_usage(deep=True)
print("memory_usage(deep=True) took %.4fsecs" % (time.time() - s))
Problem description
Performance of memory_usage(deep=True) on object columns appears to have regressed significantly since v0.23.4: once in v0.24.0, and again in v1.0.0, with the latter regression still present in v1.0.3.
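For context, deep introspection of an object column has to touch every element to size the Python objects it references, so it is inherently O(n) in Python-level accesses. The sketch below approximates that accounting for illustration only; it is not pandas' actual implementation (which goes through a Cython helper), just a way to see the kind of work involved:

import sys
import pandas as pd

def approx_deep_usage(series: pd.Series) -> int:
    # Pointer storage in the underlying ndarray, plus the size of each
    # referenced Python object.
    values = series.to_numpy()
    return values.nbytes + sum(sys.getsizeof(v) for v in values)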
Output
v1.0.3
memory_usage(deep=True) took 26.4566secs
v0.24.0
memory_usage(deep=True) took 6.0479secs
v0.23.4
memory_usage(deep=True) took 0.4633secs
Removing the df["a"] = df["a"].astype("object") line restores the expected speed in v1.0.3:
memory_usage(deep=True) took 0.0024secs
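As a possible interim workaround (my own sketch, not a fix for the regression), large object columns can be sized from a sample instead of element by element; the estimated_memory_usage helper below is purely illustrative:

import sys
import pandas as pd

def estimated_memory_usage(df: pd.DataFrame, sample_size: int = 100_000) -> int:
    # Estimate deep memory usage, sampling large object columns rather
    # than walking every element.
    total = df.index.memory_usage()
    for col in df.columns:
        s = df[col]
        if s.dtype == object and len(s) > sample_size:
            sample = s.sample(sample_size, random_state=0)
            per_elem = sum(sys.getsizeof(v) for v in sample) / sample_size
            total += int(per_elem * len(s)) + s.to_numpy().nbytes
        else:
            total += s.memory_usage(deep=True, index=False)
    return total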
Output of pd.show_versions()
INSTALLED VERSIONS
commit : None
python : 3.6.5.final.0
python-bits : 64
OS : Linux
OS-release : 3.16.0-77-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_AU.UTF-8
LOCALE : en_AU.UTF-8
pandas : 1.0.3
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.1.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None