DataFrame.memory_usage(index=True, deep=False)
Return the memory usage of each column in bytes.
The memory usage can optionally include the contribution of the index and elements of object dtype.
This value is displayed in DataFrame.info by default. This can be suppressed by setting pandas.options.display.memory_usage to False.
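As a quick illustration of toggling that option (a minimal sketch; pd.set_option / pd.reset_option are the standard way to change pandas options):

```python
import pandas as pd

df = pd.DataFrame({"a": range(3), "b": ["x", "y", "z"]})

# By default, info() ends with a "memory usage: ..." line.
df.info()

# Suppress the memory-usage line in info() output.
pd.set_option("display.memory_usage", False)
df.info()  # no "memory usage" line now

# Restore the default behavior.
pd.reset_option("display.memory_usage")
```

pd.reset_option is used here so the change does not leak into the rest of a session.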
Parameters:
    index : bool, default True
        Specifies whether to include the memory usage of the DataFrame's index in the returned Series. If index=True, the memory usage of the index is the first item in the output.
    deep : bool, default False
        If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned values.
Returns:
    sizes : Series
        A Series whose index is the original column names and whose values are the memory usage of each column in bytes.
See also
numpy.ndarray.nbytes
Series.memory_usage
pandas.Categorical
DataFrame.info
Examples

>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool']
>>> data = dict([(t, np.ones(shape=5000).astype(t))
...              for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
   int64  float64  complex128 object  bool
0      1      1.0      (1+0j)      1  True
1      1      1.0      (1+0j)      1  True
2      1      1.0      (1+0j)      1  True
3      1      1.0      (1+0j)      1  True
4      1      1.0      (1+0j)      1  True
>>> df.memory_usage()
Index             80
int64          40000
float64        40000
complex128     80000
object         40000
bool            5000
dtype: int64
>>> df.memory_usage(index=False)
int64         40000
float64       40000
complex128    80000
object        40000
bool           5000
dtype: int64
The memory footprint of object dtype columns is ignored by default:
>>> df.memory_usage(deep=True)
Index             80
int64          40000
float64        40000
complex128     80000
object        160000
bool            5000
dtype: int64
Use a Categorical for efficient storage of an object-dtype column with many repeated values.
>>> df['object'].astype('category').memory_usage(deep=True)
5168
© 2008–2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
Licensed under the 3-clause BSD License.
http://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.memory_usage.html