Hive / HIVE-11470

NPE in DynamicPartitionFileRecordWriterContainer on null part-keys

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.2.0
    • Fix Version/s: 1.2.2, 2.0.0
    • Component/s: HCatalog
    • Labels: None

    Description

      When partitioning data using HCatStorer, one sees the following NPE if the dynamic-partition key has a null value:

      2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.NullPointerException
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256)
      at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
      at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
      at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
      at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
      at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
      Caused by: java.lang.NullPointerException
      at org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141)
      at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110)
      at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54)
      at org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309)
      at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
      at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
      at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
      at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
      at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471)
      ... 11 more
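
      For reference, HCatStorer funnels writes through HCatOutputFormat, so any dynamic-partitioning client of HCatOutputFormat hits the same code path. Below is a minimal sketch of a map-only job that should trigger the NPE; the class name NullDynPartRepro, the "mydb"/"mytable" names, and the two-column layout (one data column, one partition column) are hypothetical, not taken from this report:

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.NullWritable;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
      import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
      import org.apache.hive.hcatalog.data.DefaultHCatRecord;
      import org.apache.hive.hcatalog.data.HCatRecord;
      import org.apache.hive.hcatalog.mapreduce.HCatOutputFormat;
      import org.apache.hive.hcatalog.mapreduce.OutputJobInfo;

      public class NullDynPartRepro {

        /** Emits one record whose dynamic-partition column is null. */
        public static class ReproMapper
            extends Mapper<Object, Object, NullWritable, HCatRecord> {
          @Override
          protected void map(Object key, Object val, Context ctx)
              throws IOException, InterruptedException {
            HCatRecord record = new DefaultHCatRecord(2);
            record.set(0, "some-data"); // ordinary data column
            record.set(1, null);        // dynamic-partition column: null value
            ctx.write(NullWritable.get(), record); // NPEs in getLocalFileWriter()
          }
        }

        public static void main(String[] args) throws Exception {
          Job job = Job.getInstance(new Configuration());
          job.setJarByClass(NullDynPartRepro.class);
          job.setMapperClass(ReproMapper.class);
          job.setNumReduceTasks(0); // map-only: mapper output goes straight to HCatOutputFormat
          job.setInputFormatClass(TextInputFormat.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          // A null partition-value map requests dynamic partitioning on all of
          // the table's partition keys.
          HCatOutputFormat.setOutput(job, OutputJobInfo.create("mydb", "mytable", null));
          HCatOutputFormat.setSchema(job, HCatOutputFormat.getTableSchema(job.getConfiguration()));
          job.setOutputFormatClass(HCatOutputFormat.class);
          job.setOutputKeyClass(NullWritable.class);
          job.setOutputValueClass(DefaultHCatRecord.class);
          System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
      }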
      

      The reason is that DynamicPartitionFileRecordWriterContainer assumes every dynamic-partition value is non-null when fetching a local file-writer instance:

      DynamicPartitionFileRecordWriterContainer.java
        @Override
        protected LocalFileWriter getLocalFileWriter(HCatRecord value)
            throws IOException, HCatException {

          OutputJobInfo localJobInfo = null;
          // Calculate which writer to use from the remaining values - this needs to
          // be done before we delete cols.
          List<String> dynamicPartValues = new ArrayList<String>();
          for (Integer colToAppend : dynamicPartCols) {
            // <-- NPE here: value.get(colToAppend) is null for a null partition key.
            dynamicPartValues.add(value.get(colToAppend).toString());
          }
          ...
        }
      

      The code must check for null and substitute the default partition name: "__HIVE_DEFAULT_PARTITION__", or whatever hive.exec.default.partition.name is configured to, or equivalent.
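
      A minimal sketch of that null-check, written as an illustrative helper (the class and method names here are made up for illustration, not taken from the attached patches; the default partition name is read from the job configuration):

      import java.util.ArrayList;
      import java.util.List;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hive.conf.HiveConf;
      import org.apache.hive.hcatalog.data.HCatRecord;

      final class DynamicPartValues {  // hypothetical helper, for illustration only

        /**
         * Extracts dynamic-partition values from the record, substituting the
         * configured default partition name ("__HIVE_DEFAULT_PARTITION__" unless
         * hive.exec.default.partition.name overrides it) for null keys, rather
         * than calling toString() on a null reference.
         */
        static List<String> extract(HCatRecord value, List<Integer> dynamicPartCols,
            Configuration conf) {
          String defaultPartName =
              HiveConf.getVar(conf, HiveConf.ConfVars.DEFAULTPARTITIONNAME);
          List<String> dynamicPartValues = new ArrayList<String>();
          for (Integer colToAppend : dynamicPartCols) {
            Object partVal = value.get(colToAppend);
            dynamicPartValues.add(partVal == null ? defaultPartName
                                                  : partVal.toString());
          }
          return dynamicPartValues;
        }
      }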

Attachments

    1. HIVE-11470.2.patch (2 kB, Mithun Radhakrishnan)
    2. HIVE-11470.1.patch (1 kB, Mithun Radhakrishnan)


People

    Assignee: Mithun Radhakrishnan
    Reporter: Mithun Radhakrishnan
    Votes: 1
    Watchers: 4
