HIVE-968: map join may lead to very large files


Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.5.0
    • Component/s: Query Processor
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      If the table being loaded for the map join is very large, it may produce very large files on the mappers.
      The job may never complete, and those files are never cleaned from the tmp directory.
      It would be great if the table could be placed in the distributed cache, but at a minimum the following should be added:

      If the table (source) being joined would produce a very big file, the mapper should just throw an error.
      New configuration parameters can be added to limit the number of rows or the size of the table.
      The mapper should not be retried 4 times; it should fail immediately.
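The row-limit check described above could look like the following sketch. The class name, the limit's source, and the exception type are illustrative assumptions, not the committed implementation; in Hive the limit would come from a configuration parameter rather than a constructor argument.

```java
// Sketch: fail fast while the mapper builds the in-memory hash table
// for a map join, instead of spilling an ever-growing file to tmp and
// letting the framework retry the mapper.
public class MapJoinRowGuard {
    private final long maxRows; // would be read from a config parameter
    private long rowCount = 0;

    public MapJoinRowGuard(long maxRows) {
        this.maxRows = maxRows;
    }

    /** Call once per row loaded from the small table. */
    public void addRow() {
        if (++rowCount > maxRows) {
            // Abort immediately rather than producing a very large file.
            throw new RuntimeException("Map-join table exceeds " + maxRows
                + " rows; aborting instead of spilling to tmp");
        }
    }
}
```

A size-based limit would work the same way, accumulating the serialized byte count per row instead of the row count.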

      I can't think of a better way for the mapper to communicate with the client than for it to write to some well-known
      HDFS file: the client can read the file periodically while polling, and if it sees an error it can just kill the job,
      with an appropriate error message indicating to the client why the job died.
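One polling step of that client-side loop might look like the sketch below. It uses `java.nio.file` as a stand-in for the HDFS `FileSystem` API, and the marker-file convention is an assumption for illustration, not the mechanism the patch actually uses.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ErrorFilePoller {
    /**
     * One polling step: if the well-known error-marker file exists,
     * return the mapper's message so the client can kill the job and
     * surface the reason; otherwise return null and keep waiting.
     */
    public static String pollOnce(Path errorFile) throws IOException {
        if (Files.exists(errorFile)) {
            return new String(Files.readAllBytes(errorFile));
        }
        return null;
    }
}
```

The client would call this between job-status checks and tear the job down as soon as a non-null message comes back.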

      Attachments

        1. HIVE-968_2.patch
          22 kB
          Ning Zhang
        2. HIVE-968_3.patch
          39 kB
          Ning Zhang
        3. HIVE-968_4.patch
          45 kB
          Ning Zhang
        4. HIVE-968.patch
          13 kB
          Ning Zhang


          People

            Assignee: Ning Zhang (nzhang)
            Reporter: Namit Jain (namit)
            Votes: 0
            Watchers: 0
