Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Description
I encountered this exception during one of the randomwriter runs. I think this situation can be improved by using org.apache.hadoop.fs.LocalDirAllocator, which was written to handle this kind of problem (a sketch of the idea follows the stack trace below). I set the fix version to 0.14, but wonder whether it makes sense to have it in 0.13 itself, since the amount of code change would not be much.
java.io.FileNotFoundException: /local/dfs/data/tmp/client-1299146109450372217 (Read-only file system)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1356)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1273)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1255)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:152)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
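For illustration, here is a minimal sketch of how the client write path could ask org.apache.hadoop.fs.LocalDirAllocator for a writable buffer directory instead of hard-coding a single tmp directory. The class name BufferDirSketch and the choice of the 'dfs.client.buffer.dir' key (taken from the linked HADOOP-1331 title) as the allocator context are assumptions for this example, not the actual DFSClient change.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;

public class BufferDirSketch {
  // Allocator bound to the config key whose value is a comma-separated list
  // of local directories (assumed key for this sketch).
  private static final LocalDirAllocator DIRS =
      new LocalDirAllocator("dfs.client.buffer.dir");

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Ask the allocator for a path it can actually write to; directories that
    // cannot be created or written (e.g. a read-only file system) are skipped,
    // which avoids the FileNotFoundException shown in the stack trace above.
    Path backupFile = DIRS.getLocalPathForWrite("client-" + System.nanoTime(), conf);
    System.out.println("Buffering block data to " + backupFile);
  }
}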
Attachments
Issue Links
- duplicates: HADOOP-1331 Multiple entries for 'dfs.client.buffer.dir' (Closed)