Details
- Type: Bug
- Status: Patch Available
- Priority: Minor
- Resolution: Unresolved
Description
In start-all.sh, $HADOOP_HDFS_HOME and $HADOOP_YARN_HOME already default to $HADOOP_HOME, and that works well:
# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
When we execute sqoop.sh, we also need $HADOOP_HDFS_HOME and $HADOOP_YARN_HOME, but there they should point at ${HADOOP_HOME}/share/hadoop/hdfs and ${HADOOP_HOME}/share/hadoop/yarn. If we export those paths in our environment, start-all.sh fails; if we instead set both variables to $HADOOP_HOME, sqoop2 fails to start. The defaults in sqoop.sh are:
HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-${HADOOP_HOME}/share/hadoop/common}
HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-${HADOOP_HOME}/share/hadoop/hdfs}
HADOOP_MAPRED_HOME=${HADOOP_MAPRED_HOME:-${HADOOP_HOME}/share/hadoop/mapreduce}
HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-${HADOOP_HOME}/share/hadoop/yarn}
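The conflict comes from how `${VAR:-default}` expansion works: the sqoop.sh fallback only applies when the variable is unset or empty, so any value exported for start-all.sh wins. A minimal demonstration (paths and values are hypothetical, not the real scripts):

```shell
#!/bin/sh
# Hypothetical values for illustration only.
unset HADOOP_HDFS_HOME
HADOOP_HOME=/opt/hadoop

# sqoop.sh-style default: applies because HADOOP_HDFS_HOME is unset.
HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-${HADOOP_HOME}/share/hadoop/hdfs}
echo "$HADOOP_HDFS_HOME"    # -> /opt/hadoop/share/hadoop/hdfs

# But if the user exports HADOOP_HDFS_HOME=$HADOOP_HOME so that
# start-all.sh works, the sqoop.sh default never takes effect:
HADOOP_HDFS_HOME=$HADOOP_HOME
HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-${HADOOP_HOME}/share/hadoop/hdfs}
echo "$HADOOP_HDFS_HOME"    # -> /opt/hadoop
```

So one environment cannot satisfy both scripts at once, which is exactly the failure described above.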
IMO, we can improve sqoop.sh as in my attached patch: just remove the validation that checks whether HADOOP_COMMON_HOME and HADOOP_YARN_HOME are null.
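The attached patch is not reproduced here; a minimal sketch of the idea (the shape of the removed check and all values are assumed) is to rely on the defaults instead of aborting when the variables are unset:

```shell
#!/bin/sh
# Hypothetical value for illustration only.
HADOOP_HOME=/opt/hadoop
unset HADOOP_COMMON_HOME HADOOP_YARN_HOME

# Old behavior (assumed shape, removed by the patch): hard failure
# when the variables are not set.
# if [ -z "$HADOOP_COMMON_HOME" ] || [ -z "$HADOOP_YARN_HOME" ]; then
#   echo "Hadoop home variables are not set" >&2
#   exit 1
# fi

# Kept behavior: fall back to locations under HADOOP_HOME, so an
# unset variable is no longer an error.
HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-${HADOOP_HOME}/share/hadoop/common}
HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-${HADOOP_HOME}/share/hadoop/yarn}
echo "$HADOOP_COMMON_HOME"    # -> /opt/hadoop/share/hadoop/common
echo "$HADOOP_YARN_HOME"      # -> /opt/hadoop/share/hadoop/yarn
```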
Attachments
Issue Links