Details
Type: New Feature
Status: Open
Priority: Major
Resolution: Unresolved
Description
For data where the metadata itself is large (> 10000 fields), a full in-memory reconstruction of a record batch may be impractical if the user's goal is random access on a potentially small subset of the batch's fields.
I propose adding an API that enables "cheap" inspection of the record batch metadata and reconstruction of fields.
Because of the flattened buffer and field metadata, the complexity of random field access currently scales with the number of fields. In the future we may devise strategies to mitigate this (e.g., storing a precomputed buffer/field lookup table in the schema).
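To illustrate the cost being described, here is a small self-contained sketch (not an existing Arrow API; all names are hypothetical). It models a record batch's flattened buffer metadata as a list of buffer lengths, shows that locating one field's buffers requires an O(field_index) scan, and shows how a precomputed cumulative-offset table makes each lookup O(1):

```python
# Hypothetical model: each field contributes a fixed number of buffers
# (here: validity bitmap + data buffer), and the batch metadata stores
# only a flat sequence of buffer lengths.
BUFFERS_PER_FIELD = 2

def buffers_for_field(buffer_lengths, field_index):
    """Scan the flattened metadata from the start to locate one field's
    buffers as (offset, length) pairs -- O(field_index) work per lookup."""
    start = 0
    for i in range(field_index * BUFFERS_PER_FIELD):
        start += buffer_lengths[i]
    result = []
    base = field_index * BUFFERS_PER_FIELD
    for length in buffer_lengths[base:base + BUFFERS_PER_FIELD]:
        result.append((start, length))
        start += length
    return result

def build_lookup_table(buffer_lengths):
    """Precompute cumulative buffer offsets once (the mitigation suggested
    above); afterwards each field lookup is O(1)."""
    offsets = [0]
    for length in buffer_lengths:
        offsets.append(offsets[-1] + length)
    return offsets

def buffers_for_field_fast(offset_table, buffer_lengths, field_index):
    """O(1) random access using the precomputed offset table."""
    base = field_index * BUFFERS_PER_FIELD
    return [(offset_table[base + j], buffer_lengths[base + j])
            for j in range(BUFFERS_PER_FIELD)]
```

For a batch with many fields, an API for "cheap" inspection would pay the table-building cost once (or read a table stored in the schema) rather than re-scanning the flattened metadata for every field accessed.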