We are working on Parquet files that involve nested lists. At most they are multi-dimensional lists of simple types (never structs), but I understand that, for Parquet, they are still nested columns and involve repetition levels.
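For concreteness, a sketch of the shape of our schemas (field names invented for illustration):

```python
import pyarrow as pa

# Illustrative only: nested lists of simple types, never structs.
schema = pa.schema([
    ("measurements", pa.list_(pa.list_(pa.float64()))),  # multi-dimensional list
    ("payloads", pa.list_(pa.binary())),                 # lists of byte arrays
])
```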
Some of these columns hold lists of rather large byte arrays that dominate the overall size of the row. When we bump `row_group_size` above 16MB we see:
I conclude it's this bit complaining:
This appears to happen in the call stack of `ColumnReader::ColumnReaderImpl::NextBatch`,
and it appears to be provoked by this constant:
This appears to imply that column chunk data larger than `kBinaryChunksize` (hardcoded to 16MB) is returned as a `Datum::CHUNKED_ARRAY` of more than one 16MB chunk, which ultimately leads to the `Status::NotImplemented` error.
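A minimal sketch of what we believe triggers this, assuming pyarrow and sizes chosen so a single column chunk exceeds 16MB (the file name and payload sizes are made up):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# One ~1 MiB blob per row, 32 rows in a single row group: the list<binary>
# column chunk then holds ~32 MiB, past the presumed 16MB threshold.
payload = b"x" * (1 << 20)
table = pa.table({"payloads": pa.array([[payload]] * 32,
                                       type=pa.list_(pa.binary()))})
pq.write_table(table, "repro.parquet", row_group_size=32)

# We would expect this read to fail with the Status::NotImplemented error,
# since the nested column's data comes back as a multi-chunk array.
pq.read_table("repro.parquet")
```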
We have no influence over the data we ingest, we have some influence over how we flatten it, and we need to tune our `row_group_size` to something sensibly larger than 16MB.
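One mitigation we considered, since we control the flattening, is to explode the list column into a flat binary column plus a parent index before writing; if our reading of the code is right, non-nested binary columns can legitimately come back as chunked arrays and never hit this path. A sketch (pyarrow again, invented names), though it changes our schema and pushes reassembly onto every reader:

```python
import pyarrow as pa
import pyarrow.compute as pc

payload = b"x" * (1 << 20)
blobs = pa.array([[payload], [payload, payload]], type=pa.list_(pa.binary()))

# Flatten list<binary> into (row_id, blob) pairs so the large byte arrays
# live in a top-level column rather than a nested one.
flat = pa.table({
    "row_id": pc.list_parent_indices(blobs),
    "blob": pc.list_flatten(blobs),
})
```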
Beyond that, we see no obvious workaround, and so we need to ask: (1) does the above diagnosis appear to be correct? (2) Do people see any sensible workarounds? (3) Is there an imminent intention in the Arrow community to fix this, and if not, how difficult would it be to fix (in case we can afford to help)?