HIVE-3387: meta data file size exceeds limit


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.7.1
    • Fix Version/s: 0.10.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      The cause is almost certainly that an array list, rather than a set, is used in the split locations API, so duplicate host entries are serialized repeatedly and inflate the split metadata. This looks like a bug in Hive's CombineFileInputFormat.
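      A minimal sketch of the shape of the fix (hypothetical code, not the attached patch): collecting the hosts of a combined split into a set instead of a list, so each host is written into the split metainfo file only once. The class and method names here are illustrative assumptions.

          import java.util.LinkedHashSet;
          import java.util.List;
          import java.util.Set;

          public final class SplitLocations {
              // Hosts contributed by the chunks of a combined split often repeat;
              // keeping them in an ArrayList serializes every duplicate and can
              // push the split metainfo file past
              // mapreduce.jobtracker.split.metainfo.maxsize.
              public static String[] dedupe(List<String> hostsPerChunk) {
                  // LinkedHashSet drops duplicates while keeping first-seen order.
                  Set<String> unique = new LinkedHashSet<String>(hostsPerChunk);
                  return unique.toArray(new String[unique.size()]);
              }
          }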

      Reproduce:
      Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting the Hive query, then run a large Hive query that writes data into a partitioned table. Because of the large number of splits, the job submitted to Hadoop fails with an exception:

      meta data size exceeds 100000000.
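      For reference, the same limit can also be raised or disabled programmatically on the Hadoop side; a minimal sketch using a plain Hadoop Configuration (the value -1 removes the check entirely):

          import org.apache.hadoop.conf.Configuration;

          public class RaiseSplitMetaInfoLimit {
              public static void main(String[] args) {
                  Configuration conf = new Configuration();
                  // Default limit is 10000000 bytes; -1 disables it.
                  conf.setLong("mapreduce.jobtracker.split.metainfo.maxsize", 100000000L);
                  System.out.println(conf.get("mapreduce.jobtracker.split.metainfo.maxsize"));
              }
          }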

        Attachments

        1. HIVE-3387.1.patch.txt (3 kB, Navis Ryu)


              People

              • Assignee: Navis Ryu (navis)
              • Reporter: Alexander Alten-Lorenz (alo.alt)
              • Votes: 0
              • Watchers: 8
