Apache Arrow / ARROW-567

[C++] File and stream APIs for interacting with "large" schemas


Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: C++

    Description

      For data where the metadata itself is large (> 10000 fields), doing a full in-memory reconstruction of a record batch may be impractical if the user's goal is to do random access on a potentially small subset of a batch.

      I propose adding an API that enables "cheap" inspection of the record batch metadata and reconstruction of fields.

      Because the buffer and field metadata are flattened, the complexity of random field access currently scales with the number of fields. In the future we may devise strategies to mitigate this (e.g. storing a precomputed buffer/field lookup table in the schema).
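      A minimal standalone sketch of the cost model described above (the struct and function names here are hypothetical illustrations, not the Arrow C++ API): with flattened metadata, locating the buffers for field i requires summing the buffer counts of all preceding fields, which is O(number of fields); a precomputed offset table makes each lookup O(1) at the cost of one pass over the schema.

      ```cpp
      // Sketch only: illustrates why random field access over flattened
      // metadata scales with the field count, and the lookup-table mitigation.
      #include <cassert>
      #include <cstddef>
      #include <iostream>
      #include <string>
      #include <vector>

      // Hypothetical flattened metadata: each field records only how many
      // buffers it owns; the buffers for all fields live in one flat array.
      struct FieldMeta {
        std::string name;
        std::size_t num_buffers;
      };

      // Linear scan: finding where field `index` starts in the flat buffer
      // array requires walking every preceding field -- O(number of fields).
      std::size_t BufferOffsetLinear(const std::vector<FieldMeta>& fields,
                                     std::size_t index) {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < index; ++i) {
          offset += fields[i].num_buffers;
        }
        return offset;
      }

      // Mitigation mentioned above: precompute a field -> buffer-offset
      // table once, after which each access is O(1).
      std::vector<std::size_t> BuildOffsetTable(
          const std::vector<FieldMeta>& fields) {
        std::vector<std::size_t> offsets(fields.size());
        std::size_t offset = 0;
        for (std::size_t i = 0; i < fields.size(); ++i) {
          offsets[i] = offset;
          offset += fields[i].num_buffers;
        }
        return offsets;
      }

      int main() {
        // A "large" schema: 10000 flat fields, each with two buffers
        // (validity bitmap + data), as for a primitive column.
        std::vector<FieldMeta> fields;
        for (int i = 0; i < 10000; ++i) {
          fields.push_back({"f" + std::to_string(i), 2});
        }
        const std::vector<std::size_t> table = BuildOffsetTable(fields);
        // Both strategies agree on the answer; only the access cost differs.
        assert(BufferOffsetLinear(fields, 9999) == table[9999]);
        std::cout << "field 9999 buffers start at " << table[9999] << std::endl;
        return 0;
      }
      ```

      A schema-resident version of `BuildOffsetTable`'s output is one possible shape for the lookup-table idea: pay the O(n) pass once when the schema is read, then serve arbitrary field accesses in constant time.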


          People

            Assignee: Unassigned
            Reporter: Wes McKinney (wesm)
