Hadoop Common / HADOOP-1622

Hadoop should provide a way to allow the user to specify jar file(s) the user job depends on


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.17.0
    • Component/s: None
    • Labels: None
    Release Note
      This patch adds new command line options for

      hadoop jar

      which are

      hadoop jar -files <comma separated list of files> -libjars <comma separated list of jars> -archives <comma separated list of archives>

      The -files option allows you to specify a comma separated list of paths which will be present in the current working directory of your task.
      The -libjars option allows you to add jars to the classpaths of the maps and reduces.
      The -archives option allows you to pass archives as arguments; these are unzipped/unjarred, and a link with the name of the jar/zip is created in the current working directory of tasks.
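      A usage sketch following the option form given in this release note; the jar, class, and file names below are hypothetical stand-ins, and the exact placement of these options relative to the jar argument may vary by Hadoop version:

      ```
      hadoop jar -files lookup.txt,stopwords.txt \
          -libjars parse-lib.jar \
          -archives dict.zip \
          myjob.jar org.example.MyJob input output
      ```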

    Description

      More likely than not, a user's job may depend on multiple jars.
      Right now, when submitting a job through bin/hadoop, there is no way for the user to specify that.
      A workaround is to re-package all the dependent jars into a new jar, or to put the dependent jar files in the lib dir of the new jar.
      This workaround causes unnecessary inconvenience to the user. Furthermore, if the user does not own the main function
      (as is the case when the user uses Aggregate, datajoin, or streaming), the user has to re-package those system jar files too.
      It is much desired that Hadoop provide a clean and simple way for the user to specify a list of dependent jar files at the time
      of job submission. Something like:

      bin/hadoop .... --depending_jars j1.jar:j2.jar
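      The lib-dir workaround described above can be sketched as follows; all jar names are hypothetical stand-ins for the user's real dependencies:

      ```shell
      # Sketch of the re-packaging workaround: dependent jars are bundled
      # under lib/ inside the job jar, where Hadoop's task runtime unpacks
      # them and adds them to the task classpath.
      set -e
      mkdir -p build/lib
      # stand-ins for the job's dependent jars
      touch build/lib/dep-a.jar build/lib/dep-b.jar
      # add the lib/ directory to the job jar
      (cd build && zip -qr ../myjob.jar lib)
      # list the packaged entries
      unzip -l myjob.jar
      ```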

      Attachments

        1. multipleJobJars.patch
          8 kB
          Dennis Kubes
        2. multipleJobResources.patch
          43 kB
          Dennis Kubes
        3. multipleJobResources2.patch
          44 kB
          Dennis Kubes
        4. hadoop-1622-4-20071008.patch
          48 kB
          Dennis Kubes
        5. HADOOP-1622-5.patch
          46 kB
          Doug Cutting
        6. HADOOP-1622-6.patch
          46 kB
          Doug Cutting
        7. HADOOP-1622-7.patch
          44 kB
          Doug Cutting
        8. HADOOP-1622-8.patch
          45 kB
          Dennis Kubes
        9. HADOOP-1622-9.patch
          46 kB
          Dennis Kubes
        10. HADOOP-1622_1.patch
          20 kB
          Mahadev Konar
        11. HADOOP-1622_2.patch
          28 kB
          Mahadev Konar
        12. HADOOP-1622_3.patch
          29 kB
          Mahadev Konar
        13. HADOOP-1622_4.patch
          28 kB
          Mahadev Konar
        14. HADOOP-1622_5.patch
          30 kB
          Mahadev Konar
        15. HADOOP-1622_6.patch
          30 kB
          Mahadev Konar


            People

              Assignee: Mahadev Konar (mahadev)
              Reporter: Runping Qi (runping)
              Votes: 0
              Watchers: 9

              Dates

                Created:
                Updated:
                Resolved: