Details
- Type: Task
- Status: Resolved
- Priority: Major
- Resolution: Done
Description
HADOOP_CLASSPATH is the standard way to add additional jar files to the classpath of MapReduce/Spark/… jobs. If something is added to HADOOP_CLASSPATH, it also ends up on the classpath of the classic Hadoop daemons.
But the Ozone components don't need any new jar files (cloud connectors, libraries). It is safer to separate HADOOP_CLASSPATH from OZONE_CLASSPATH: if something is really needed on the classpath of the Ozone daemons, a dedicated environment variable should be used.
Most probably this can be fixed in
hadoop-hdds/common/src/main/bin/hadoop-functions.sh
The hadoop-ozone/dev/src/main/compose files should also be checked (some of them contain HADOOP_CLASSPATH).
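The intended separation can be sketched as follows. This is a hypothetical helper, not the actual hadoop-functions.sh code: the function name `ozone_build_classpath` and the base classpath are assumptions for illustration. The point is that the daemon classpath honors only OZONE_CLASSPATH, so jars added to HADOOP_CLASSPATH for MapReduce/Spark jobs do not leak into Ozone daemons.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: build the Ozone daemon classpath from
# OZONE_CLASSPATH only, deliberately ignoring HADOOP_CLASSPATH.
ozone_build_classpath() {
  local cp="/opt/ozone/share/ozone/lib/*"   # assumed base classpath
  # Only honor the Ozone-specific variable.
  if [[ -n "${OZONE_CLASSPATH:-}" ]]; then
    cp="${cp}:${OZONE_CLASSPATH}"
  fi
  echo "${cp}"
}

# Demo: HADOOP_CLASSPATH is set, but must not appear in the result.
export HADOOP_CLASSPATH="/tmp/cloud-connector.jar"
export OZONE_CLASSPATH="/opt/extra/ozone-plugin.jar"
ozone_build_classpath
```

With this split, a compose file (or daemon env) that sets HADOOP_CLASSPATH for jobs no longer affects which jars the Ozone daemons load.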
Attachments
Issue Links
- is fixed by HDDS-4525 Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones (Resolved)