Details
- Type: Improvement
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: Impala 3.1.0
- Labels: ghx-label-2
Description
A recent test job erred out when HDFS could not be set up. From the console:
...
14:59:31 Stopping hdfs
14:59:33 Starting hdfs (Web UI - http://localhost:5070)
14:59:38 Failed to start hdfs-datanode. The end of the log (/data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-3/var/log/hdfs-datanode.out) is:
14:59:39 WARNING: /data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-3/var/log/hadoop-hdfs does not exist. Creating.
14:59:39 Failed to start hdfs-datanode. The end of the log (/data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-2/var/log/hdfs-datanode.out) is:
14:59:39 WARNING: /data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-2/var/log/hadoop-hdfs does not exist. Creating.
14:59:39 Failed to start hdfs-datanode. The end of the log (/data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-1/var/log/hdfs-datanode.out) is:
14:59:39 WARNING: /data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/cluster/cdh6/node-1/var/log/hadoop-hdfs does not exist. Creating.
14:59:47 Namenode started
14:59:47 Error in /data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/bin/run-mini-dfs.sh at line 41: $IMPALA_HOME/testdata/cluster/admin start_cluster
14:59:48 Error in /data/jenkins/workspace/impala-asf-master-exhaustive-centos6/repos/Impala/testdata/bin/run-all.sh at line 44: tee ${IMPALA_CLUSTER_LOGS_DIR}/run-mini-dfs.log
...
From one of the datanodes that could not start:
...
2018-08-07 14:59:38,561 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1365)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:497)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2778)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2681)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2728)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2872)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2896)
2018-08-07 14:59:38,568 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
2018-08-07 14:59:38,575 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
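For context, the DataNode aborts startup whenever dfs.datanode.max.locked.memory (used for HDFS caching) resolves to a value greater than zero while the native Hadoop library (libhadoop) cannot be loaded. A minimal hdfs-site.xml fragment of the kind of setting that trips this check follows; the value shown is illustrative, not necessarily the minicluster's actual configuration:

    <property>
      <name>dfs.datanode.max.locked.memory</name>
      <!-- any value > 0 requires libhadoop native code; 0 (the default) disables the check -->
      <value>67108864</value>
    </property>

Running "hadoop checknative -a" on the affected node would show whether libhadoop and its dependencies were loadable, confirming the missing-native-code side of the failure.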