Hive / HIVE-11583

When a PTF is used over large partitions, the result can be corrupted

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.14.0, 0.13.1, 0.14.1, 1.0.0, 1.2.0, 1.2.1
    • Fix Version/s: 1.3.0, 2.0.0
    • Component/s: PTF-Windowing
    • Labels:
      None
    • Environment:
      Hadoop 2.6 + Apache Hive built from trunk

    Description

      Dataset:
      The window has 50001 records (2 blocks on disk and 1 block in memory).
      The size of the second block is >32 MB (2 splits).

      Result:
      When the last block is read from disk, only the first split is actually loaded; the second split gets missed. The total count of the result dataset is correct, but some records are missing and others are duplicated.
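      The failure mode above can be sketched as a small simulation (Python, purely illustrative; `SPLITS`, the split sizes, and the reader functions are assumptions for the sketch, not Hive internals):

      ```python
      # Hypothetical model: the last on-disk block is larger than one split,
      # so it is stored as two splits. A buggy reader that reopens the first
      # split instead of advancing to the second re-serves old rows, so the
      # total row count is preserved while the content is wrong.

      BLOCK = list(range(100))            # rows of the last on-disk block
      SPLITS = [BLOCK[:60], BLOCK[60:]]   # block spans two splits on disk

      def read_block_buggy():
          out = []
          for _ in SPLITS:                # the reader sees both splits...
              out.extend(SPLITS[0])       # ...but always loads the first one
          return out[:len(BLOCK)]         # total count matches the block size

      def read_block_fixed():
          out = []
          for split in SPLITS:            # advance through every split
              out.extend(split)
          return out

      buggy, fixed = read_block_buggy(), read_block_fixed()
      print(len(buggy) == len(BLOCK))     # True: total count is correct
      print(buggy == BLOCK)               # False: rows 60-99 lost, 0-39 duplicated
      print(fixed == BLOCK)               # True
      ```

      This reproduces the reported symptom exactly: the count of rows is right, but a tail of the partition is missing and an earlier run of rows appears twice.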

      Example:

      CREATE TABLE ptf_big_src (
        id INT,
        key STRING,
        grp STRING,
        value STRING
      ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
      
      LOAD DATA LOCAL INPATH '../../data/files/ptf_3blocks.txt.gz' OVERWRITE INTO TABLE ptf_big_src;
      
      SELECT grp, COUNT(1) cnt FROM ptf_big_src GROUP BY grp ORDER BY cnt DESC;
      --
      -- A	25000
      -- B	20000
      -- C	5001
      --
      
      CREATE TABLE ptf_big_trg AS SELECT *, row_number() OVER (PARTITION BY key ORDER BY grp) grp_num FROM ptf_big_src;
      
      SELECT grp, COUNT(1) cnt FROM ptf_big_trg GROUP BY grp ORDER BY cnt DESC;
      --
      -- A	34296
      -- B	15704
      -- C	1
      --
      

      Counts by 'grp' are incorrect!

    People

    • Assignee: Illya Yalovyy (yalovyyi)
    • Reporter: Illya Yalovyy (yalovyyi)
    • Votes: 0
    • Watchers: 5
