[PIG-2339] HCatLoader loads all the partitions in a partitioned table even though a filter clause on the partitions is specified in the Pig script


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.9.0
    • Fix Version/s: 0.9.2, 0.10.0, 0.11
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
    • Tags: HCatalog, Pig

    Description

      A table created via HCatalog has the following partitions:

      hcat -e "show partitions paritionedtable"

      grid=AB/dt=2011_07_01
      grid=AB/dt=2011_07_02
      grid=AB/dt=2011_07_03
      grid=XY/dt=2011_07_01
      grid=XY/dt=2011_07_02
      grid=XY/dt=2011_07_03
      grid=XY/dt=2011_07_04
      ...

      The total number of partitions in the table is around 3200.

      The following Pig script tries to access this data using the partition columns in its filter:

      {code}
      A = LOAD 'paritionedtable' USING org.apache.hcatalog.pig.HCatLoader();
      B = FILTER A BY grid=='AB' AND dt=='2011_07_04';
      C = LIMIT B 10;
      STORE C INTO 'HCAT' USING PigStorage();
      {code}

      This script fails to run: the job.xml generated by Pig is so large (around 8 MB) that Hadoop's limit on job configuration size prevents the job from being submitted.

      After debugging, it was found that the HCatTableInfo class receives a null filter value: getInputTableInfo(filter=null, ...).
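
      For context, this is roughly the contract involved: Pig's LoadMetadata interface defines getPartitionKeys() and setPartitionFilter(), and the loader is expected to remember the pushed predicate and hand it to HCatalog as a filter string when the job input is set up. The sketch below is only an illustration under that assumption (all names except setPartitionFilter/setLocation are made up, and the real HCatLoader code differs):

      {code}
      import java.io.IOException;

      import org.apache.hadoop.mapreduce.Job;
      import org.apache.pig.Expression;

      // Simplified sketch, not HCatLoader's actual implementation: shows where
      // a lost filter degenerates into filter=null and a full partition listing.
      public class PartitionFilterSketch {
          private String partitionFilter; // stays null if the pushdown is lost

          // Pig calls this on the front end with the partition-column part of
          // the FILTER expression, e.g. ((grid == 'AB') and (dt == '2011_07_04')).
          public void setPartitionFilter(Expression filter) throws IOException {
              partitionFilter = filter.toString();
          }

          // The stored filter must still be visible when the input is set up;
          // in this bug it arrives as null, so the metastore lookup matches all
          // ~3200 partitions and their metadata is serialized into job.xml.
          public void setLocation(String location, Job job) throws IOException {
              // e.g. HCatTableInfo.getInputTableInfo(..., partitionFilter /* null here */)
          }
      }
      {code}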

      I suspect that the setPartitionFilter() function in Pig does not pass the filter correctly to HCatLoader. This happens with both Pig 0.8 and 0.9.
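
      One way to check that suspicion is to log what Pig actually pushes into the loader. A minimal debugging sketch (the LoggingHCatLoader name is hypothetical; setPartitionFilter() is the method mandated by Pig's LoadMetadata interface):

      {code}
      import java.io.IOException;

      import org.apache.hcatalog.pig.HCatLoader;
      import org.apache.pig.Expression;

      // Hypothetical debugging subclass, not part of any patch: prints the
      // partition filter Pig pushes down, to confirm whether setPartitionFilter()
      // ever receives a non-null expression for this query.
      public class LoggingHCatLoader extends HCatLoader {
          @Override
          public void setPartitionFilter(Expression partitionFilter) throws IOException {
              System.err.println("setPartitionFilter called with: " + partitionFilter);
              super.setPartitionFilter(partitionFilter);
          }
      }
      {code}

      Using this class in place of org.apache.hcatalog.pig.HCatLoader in the LOAD statement shows whether the filter from the FILTER statement ever reaches the loader.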

      Viraj

      Attachments

        1. PIG-2339-2.patch (4 kB, Daniel Dai)
        2. PIG-2339-1.patch (6 kB, Daniel Dai)


            People

              Assignee: Daniel Dai (daijy)
              Reporter: Viraj Bhat (viraj)
              Votes: 0
              Watchers: 4
