- Basic roundtrip based on the pandas_metadata (in `to_pandas`, we check whether the pandas_metadata specifies a pandas extension dtype for a column, and if so, use it as the target dtype for that column)
- Conversion for pyarrow extension types that can define their equivalent pandas extension dtype
- A way to override the default conversion (e.g. for the built-in types, or in the absence of pandas_metadata in the schema). This would require the user to be able to specify some mapping of pyarrow type or column name to the pandas extension dtype to use.
I think it is still interesting to also cover the third case in some way.
An example use case is the new nullable dtypes introduced in pandas (e.g. the nullable integer dtype). Assume I want to read a parquet file into a pandas DataFrame using this nullable integer dtype. The pyarrow Table has no pandas_metadata indicating to use this dtype (unless it was created from a pandas DataFrame that was already using it, which will often not be the case), and the pyarrow.int64() type is also not an extension type that can define its equivalent pandas extension dtype.
Currently, the only solution is to first read the data into a pandas DataFrame (which will use floats for integer columns that contain nulls), and then afterwards convert those floats back to a nullable integer dtype.
A possible API for this could look like:
This would indicate that all columns of the pyarrow table with int64 type should be converted to pandas columns using the nullable Int64 dtype.