Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.17.1
- Component/s: None
- Labels: None
- Fix Version/s: 0.17.2 (0.17.1-H3002-H3633-H3681-H3685-H3370-H3707-H3760-H3758)
- Hadoop Flags: Reviewed
- Release Note: Allows the user to change the maximum number of xceivers in the datanode.
Description
After the fix for HADOOP-3633, some users started seeing their tasks fail with:
08/07/29 05:13:07 INFO mapred.JobClient: Task Id : task_200807290511_0001_m_000846_0, Status : FAILED
java.io.IOException: Could not obtain block: blk_-7893038518783920880 file=/tmp/files111
    at org.apache.hadoop.dfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1430)
    at org.apache.hadoop.dfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1281)
    at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1385)
    at java.io.DataInputStream.read(DataInputStream.java:83)
    at org.apache.hadoop.mapred.LineRecordReader$LineReader.backfill(LineRecordReader.java:88)
    at org.apache.hadoop.mapred.LineRecordReader$LineReader.readLine(LineRecordReader.java:114)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:179)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:50)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:211)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2122)
This happened when hundreds of mappers pulled the same file concurrently.
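Per the release note, the fix makes the DataNode's cap on concurrent block-transfer threads (xceivers) user-configurable, so clusters hit by this kind of hot-file read storm can raise the limit. A minimal sketch of such a setting in hadoop-site.xml, assuming the legacy property name dfs.datanode.max.xcievers (the spelling used by Hadoop configurations of this era) and an illustrative value, neither of which is taken from the patch itself:

```xml
<!-- hadoop-site.xml (sketch): raise the per-DataNode cap on concurrent
     block-transfer threads (xceivers). Property name and value are
     assumptions for illustration, not confirmed from this issue's patch. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2048</value>
</property>
```

A restart of the affected DataNodes would be needed for a setting like this to take effect.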
Attachments
Issue Links
- is related to: HDFS-223 Asynchronous IO Handling in Hadoop and HDFS (Open)