Log for command 'load-data functional-query exhaustive'
Loading workload 'functional-query' using exploration strategy 'exhaustive'. Logging to /data/jenkins/workspace/impala-asf-master-core-data-load/repos/Impala/logs/data_loading/data-load-functional-exhaustive.log
Error loading data. The end of the log file is:
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1086)
18/06/04 21:20:29 WARN hdfs.DataStreamer: Error Recovery for BP-1407206351-127.0.0.1-1528170335185:blk_1073743620_2799 in pipeline [DatanodeInfoWithStorage[127.0.0.1:31000,DS-37cfc57c-ab39-443c-80c9-e440cb18b63d,DISK], DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK], DatanodeInfoWithStorage[127.0.0.1:31002,DS-4ba4d3a0-af31-4eaf-b43d-89b408231481,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:31000,DS-37cfc57c-ab39-443c-80c9-e440cb18b63d,DISK]) is bad.
18/06/04 21:21:29 INFO hdfs.DataStreamer: Exception in createBlockOutputStream blk_1073743620_2799
java.io.IOException: Got error, status=ERROR, status message , ack with firstBadLink as 127.0.0.1:31002
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
	at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1778)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1507)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
18/06/04 21:21:29 WARN hdfs.DataStreamer: Error Recovery for BP-1407206351-127.0.0.1-1528170335185:blk_1073743620_2799 in pipeline [DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK], DatanodeInfoWithStorage[127.0.0.1:31002,DS-4ba4d3a0-af31-4eaf-b43d-89b408231481,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:31002,DS-4ba4d3a0-af31-4eaf-b43d-89b408231481,DISK]) is bad.
18/06/04 21:21:29 WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
put: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:31001,DS-2bc41558-4f2c-460f-ae87-5d1a6acbf42f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
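The write failed because the HDFS client's datanode-replacement policy (DEFAULT) needs a spare datanode to rebuild the pipeline, and on this three-datanode minicluster none remained once two nodes were marked bad. As the message notes, the policy is a client-side setting. A minimal sketch in Python (a standalone illustration with a hypothetical helper name, not Impala's actual cluster-config tooling) of rendering an hdfs-site.xml fragment that relaxes it for small test clusters:

# Assumption: overriding the client-side replacement policy is acceptable
# here only because this is a throwaway test minicluster. NEVER tells the
# client to keep writing on the surviving pipeline instead of aborting, at
# the cost of weaker replication for the block being written.
OVERRIDES = {
    "dfs.client.block.write.replace-datanode-on-failure.policy": "NEVER",
}

def render_hdfs_site(props):
    """Render the overrides as an hdfs-site.xml <configuration> fragment."""
    rows = "".join(
        "  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n"
        % (name, value)
        for name, value in sorted(props.items()))
    return "<configuration>\n%s</configuration>\n" % rows

if __name__ == "__main__":
    print(render_hdfs_site(OVERRIDES))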
18/06/04 21:24:25 INFO hdfs.DFSClient: Could not complete /test-warehouse/testescape_17_crlf/126._COPYING_ retrying...
HDFS path: /test-warehouse/chars_tiny does not exist or is empty. Data will be loaded.
Empty base table load for chars_tiny. Skipping load generation.
HDFS path: /test-warehouse/widetable_250_cols does not exist or is empty. Data will be loaded.
HDFS path: /test-warehouse/widetable_500_cols does not exist or is empty. Data will be loaded.
HDFS path: /test-warehouse/widetable_1000_cols does not exist or is empty. Data will be loaded.
Skipping 'functional.avro_decimal_tbl' due to include constraint match.
Skipping 'functional.no_avro_schema' due to include constraint match.
HDFS path: /test-warehouse/table_no_newline does not exist or is empty. Data will be loaded.
Empty base table load for table_no_newline. Skipping load generation.
HDFS path: /test-warehouse/table_no_newline_part does not exist or is empty. Data will be loaded.
Empty base table load for table_no_newline_part. Skipping load generation.
HDFS path: /test-warehouse/testescape_16_lf does not exist or is empty. Data will be loaded.
Empty base table load for testescape_16_lf. Skipping load generation.
HDFS path: /test-warehouse/testescape_16_crlf does not exist or is empty. Data will be loaded.
Empty base table load for testescape_16_crlf. Skipping load generation.
HDFS path: /test-warehouse/testescape_17_lf does not exist or is empty. Data will be loaded.
Empty base table load for testescape_17_lf. Skipping load generation.
Traceback (most recent call last):
  File "/data/jenkins/workspace/impala-asf-master-core-data-load/repos/Impala/testdata/bin/generate-schema-statements.py", line 836, in <module>
    test_vectors, sections, include_constraints, exclude_constraints, only_constraints)
  File "/data/jenkins/workspace/impala-asf-master-core-data-load/repos/Impala/testdata/bin/generate-schema-statements.py", line 595, in generate_statements
    load = eval_section(section['LOAD'])
  File "/data/jenkins/workspace/impala-asf-master-core-data-load/repos/Impala/testdata/bin/generate-schema-statements.py", line 533, in eval_section
    assert p.returncode == 0
AssertionError
21:35:26 Error generating schema statements for workload: functional-query
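The traceback pins the proximate cause: eval_section in generate-schema-statements.py runs a section's shell command as a subprocess and asserts a zero exit code, so the hdfs put pipeline failure above surfaces only as a bare AssertionError. A sketch of that pattern (reconstructed from the traceback alone, not the script's actual code) with the child's stderr attached to the failure:

import subprocess

def eval_section(cmd):
    """Run a section's shell command and return its stdout."""
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    # The real script asserts `p.returncode == 0` with no message; carrying
    # the command and its stderr in the assertion would point straight at
    # the underlying HDFS error instead of a bare AssertionError.
    assert p.returncode == 0, "section command failed (rc=%d): %s\n%s" % (
        p.returncode, cmd, stderr.decode("utf-8", "replace"))
    return stdout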
