Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.5.0
- Fix Version/s: None
- Component/s: None
- Environment: CentOS 7
Description
Hi all, thanks for your hard work!
When upgrading to Bigtop 1.5.0 I followed the instructions for a rolling upgrade of HDFS. These instructions have one start the namenode daemon from the command line, like so: 'hdfs namenode -rollingUpgrade started'. This bypasses the environment variables that are set when the namenode is started by the init script.
Specifically, /etc/init.d/hadoop-hdfs sets and overrides environment variables here:
[ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
But if the namenode is started by the above command, that sourcing never happens. (In our case the default Java heap is too small and the namenode fails to start.)
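To make the failure mode concrete, here is a self-contained sketch of the test-and-source pattern the init script uses. The directory and the -Xmx4g value are placeholders (a temp dir stands in for /etc/default so the snippet runs anywhere); the point is that any process which skips this step never sees the configured heap options.

```shell
# Stand-in for the packaged defaults dir (normally /etc/default).
BIGTOP_DEFAULTS_DIR=$(mktemp -d)
# Illustrative defaults file with a larger namenode heap.
echo 'export HADOOP_NAMENODE_OPTS="-Xmx4g"' > "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode"

# The same test-and-source pattern used by /etc/init.d/hadoop-hdfs:
[ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode

# After sourcing, the heap setting is visible; without it, the variable is empty.
echo "HADOOP_NAMENODE_OPTS=${HADOOP_NAMENODE_OPTS}"
```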
Possibly the sourcing should occur in /usr/lib/hadoop-hdfs/bin/hdfs, around here:
if [ "$COMMAND" = "namenode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
#>>> [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
This is also true for the other HDFS daemon types (datanode, journalnode...).
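Since the same gap exists for every daemon type, the fix in bin/hdfs could dispatch on $COMMAND rather than repeat the line per daemon. A hedged sketch, assuming the defaults files follow the hadoop-hdfs-<daemon> naming used above (the daemon list is illustrative, and a temp dir plus COMMAND=datanode are set here only so the snippet is runnable):

```shell
# Self-contained setup: fake defaults dir and a datanode invocation.
BIGTOP_DEFAULTS_DIR=$(mktemp -d)
COMMAND=datanode
echo 'export HADOOP_DATANODE_OPTS="-Xmx2g"' > "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-datanode"

# Source the matching defaults file for whichever HDFS daemon is starting.
case "$COMMAND" in
  namenode|datanode|journalnode|secondarynamenode|zkfc)
    defaults_file="${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-${COMMAND}"
    [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r "${defaults_file}" ] && . "${defaults_file}"
    ;;
esac

echo "HADOOP_DATANODE_OPTS=${HADOOP_DATANODE_OPTS}"
```

This keeps the existing init scripts working unchanged while also covering direct invocations of bin/hdfs.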
Have a good one!
C.