Hadoop Distributed Data Store / HDDS-2218

Use OZONE_CLASSPATH instead of HADOOP_CLASSPATH


    Details

    • Type: Task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: docker
    • Labels:
    • Target Version/s:

      Description

      HADOOP_CLASSPATH is the standard way to add additional jar files to the classpath of mapreduce/spark/... jobs. If something is added to HADOOP_CLASSPATH, it also ends up on the classpath of the classic Hadoop daemons.

      But the Ozone components don't need any new jar files (cloud connectors, libraries). It is safer to separate HADOOP_CLASSPATH from OZONE_CLASSPATH: if something is really needed on the classpath of the Ozone daemons, the dedicated environment variable should be used.
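      The separation could look roughly like this in a `hadoop-functions.sh`-style helper. This is only a sketch: the function name `ozone_add_classpath` and the default library path are illustrative assumptions, not the actual Ozone code.

```shell
# Sketch: build the Ozone daemon classpath from a dedicated variable.
# OZONE_HOME default and lib layout below are assumptions for illustration.
ozone_add_classpath() {
  # Start from the jars Ozone itself ships with.
  CLASSPATH="${OZONE_HOME:-/opt/ozone}/share/ozone/lib/*"

  # Only the dedicated OZONE_CLASSPATH variable extends the daemon
  # classpath; HADOOP_CLASSPATH is intentionally ignored here.
  if [ -n "${OZONE_CLASSPATH}" ]; then
    CLASSPATH="${CLASSPATH}:${OZONE_CLASSPATH}"
  fi
  export CLASSPATH
}
```

      With this shape, a user-set HADOOP_CLASSPATH can no longer leak extra jars onto the Ozone daemons.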


      Most probably it can be fixed in

      hadoop-hdds/common/src/main/bin/hadoop-functions.sh

      The hadoop-ozone/dev/src/main/compose files should also be checked (some of them contain HADOOP_CLASSPATH).
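      In the compose files the change would amount to renaming the variable in the environment configuration. A hypothetical fragment (service name and jar path are made up for illustration, not taken from the actual compose files):

```yaml
services:
  datanode:                 # illustrative service name
    image: apache/ozone
    environment:
      # Before: HADOOP_CLASSPATH=/opt/extra/plugin.jar
      # After: a dedicated variable read only by the Ozone daemons
      OZONE_CLASSPATH: /opt/extra/plugin.jar
```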


            People

            • Assignee: Sandeep Nemuri
            • Reporter: Marton Elek
            • Votes: 1
            • Watchers: 1

              Dates

              • Created:
                Updated: