Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 0.3.2
- Component/s: None
- Labels: None
Description
When a data node's disk is almost full, the name node still assigns blocks to that data node.
By the time the data node actually tries to write the data to disk, the disk may have become full.
The current implementation forces the data node to shut down when this happens.
The expected behavior is to report the block write failure and continue running.
The exception looks as follows:
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:260)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:623)
at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:410)
at java.lang.Thread.run(Thread.java:595)
2006-06-26 08:26:04,751 INFO org.apache.hadoop.dfs.DataNode: Finishing DataNode in: /tmp/hadoop/dfs/data/data
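The expected behavior described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual DataNode code: the class, method, and field names are assumptions, and a real fix would also report the failure back to the name node.

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the expected behavior: a block write that fails with
// "No space left on device" is reported as a failure, and the data
// node keeps running instead of shutting down.
public class BlockWriteSketch {
    // Simulated data node state; stays true even after a failed write.
    static boolean running = true;

    // Attempt to write a block; on IOException, report and continue
    // rather than shutting the node down.
    static boolean writeBlock(OutputStream out, byte[] block) {
        try {
            out.write(block);
            return true;
        } catch (IOException e) {
            // Report the failure (in a real fix, notify the name node).
            System.err.println("Block write failed: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // Simulate a full disk: every write throws IOException.
        OutputStream fullDisk = new OutputStream() {
            @Override
            public void write(int b) throws IOException {
                throw new IOException("No space left on device");
            }
        };
        boolean ok = writeBlock(fullDisk, new byte[]{1, 2, 3});
        System.out.println("write ok: " + ok + ", node running: " + running);
    }
}
```

The key point is that the IOException is caught at the block-write boundary and converted into a per-block failure status, leaving the data node process alive to serve other blocks.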
Attachments
Issue Links
- relates to: HADOOP-336 The task tracker should track disk space used, and have a configurable cap (Closed)