Apache Arrow / ARROW-13024

[C++][Parquet] Decoding byte stream split encoded columns fails when it has null values


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.0.0, 3.0.0, 4.0.0
    • Fix Version/s: None
    • Component/s: C++, Parquet
    • Labels: None

    Description

      Reading from a Parquet file fails with the following error:

      Data size too small for number of values (corrupted file?).

      This happens when a BYTE_STREAM_SPLIT-encoded column stores fewer values than the number of rows, which is the case when the column contains null values (definition levels are present).

      The problematic part is the check in ByteStreamSplitDecoder<DType>::SetData, which raises this error when the number of values does not match the size of the data buffer.

      I'm unsure whether I have enough experience with the internals of the encoding/decoding implementation to fix this myself, but my suggestion would be to initialize num_values_in_buffer_ with len / static_cast<int64_t>(sizeof(T)).
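      The failing check and the suggested initialization can be sketched as follows. The names SetData and num_values_in_buffer_ come from this report; everything else is a simplified, hypothetical reconstruction for illustration, not the actual Arrow implementation:

      ```cpp
      #include <cstdint>
      #include <iostream>
      #include <stdexcept>

      // Hypothetical sketch of the decoder's size bookkeeping only.
      template <typename T>
      struct ByteStreamSplitDecoderSketch {
        int64_t num_values_in_buffer_ = 0;

        // Current behavior (as reported): reject the buffer unless it holds
        // exactly `num_values` elements. With nulls present, `len` covers only
        // the non-null values, so this throws even for a valid file.
        void SetDataStrict(int num_values, int64_t len) {
          if (len != static_cast<int64_t>(num_values) *
                         static_cast<int64_t>(sizeof(T))) {
            throw std::runtime_error(
                "Data size too small for number of values (corrupted file?).");
          }
          num_values_in_buffer_ = num_values;
        }

        // Suggested change: derive the count of encoded values from the buffer
        // size itself, so a column with nulls (fewer stored values) is accepted.
        void SetDataSuggested(int /*num_values*/, int64_t len) {
          num_values_in_buffer_ = len / static_cast<int64_t>(sizeof(T));
        }
      };

      int main() {
        // 100 rows, 10 of them null: only 90 floats are actually encoded.
        ByteStreamSplitDecoderSketch<float> dec;
        try {
          dec.SetDataStrict(100, 90 * sizeof(float));
          std::cout << "strict: ok\n";
        } catch (const std::exception& e) {
          std::cout << "strict: " << e.what() << "\n";
        }
        dec.SetDataSuggested(100, 90 * sizeof(float));
        std::cout << "suggested num_values_in_buffer_ = "
                  << dec.num_values_in_buffer_ << "\n";
        return 0;
      }
      ```

      With the suggested initialization, the 90 non-null floats decode without tripping the size check; the definition levels would still determine where the nulls go.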


    People

    Assignee: Unassigned
    Reporter: Roman Karlstetter (romankarlstetter)
    Votes: 2
    Watchers: 4
