Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.19.0
- Fix Version/s: None
- Component/s: None
- Hadoop Flags: Reviewed
Description
This happens when we terminate the JobTracker (JT) using Control-C. It throws the following exception:
Exception closing file my-file
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:193)
    at org.apache.hadoop.hdfs.DFSClient.access$700(DFSClient.java:64)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:2868)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:2837)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:808)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:205)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:253)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1367)
    at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:234)
    at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:219)
Note that my-file is a file used by the JT.
Also, if the file has been renamed, the exception instead states that the earlier file does not exist. I am not sure whether this is an MR issue or a DFS issue; opening this issue for investigation.
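One plausible reading of the stack trace is a shutdown-ordering problem: the FileSystem$ClientFinalizer shutdown hook calls FileSystem.closeAll(), which closes the cached client while the JT still has files open through it, so closing those streams trips DFSClient.checkOpen(). Purely as an illustration of that kind of ordering, here is a minimal, self-contained sketch in plain Java; MockClient, MockOutputStream, and the exact close ordering are assumptions for the sketch, not the actual DFSClient, LeaseChecker, or JT code paths.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ShutdownOrderSketch {

    // Illustrative stand-in for DFSClient; not the actual Hadoop class.
    static class MockClient {
        private volatile boolean open = true;
        private final List<MockOutputStream> pendingStreams = new ArrayList<>();

        MockOutputStream create(String name) {
            MockOutputStream s = new MockOutputStream(this, name);
            pendingStreams.add(s);
            return s;
        }

        void checkOpen() throws IOException {
            // Mirrors the idea behind DFSClient.checkOpen(): any use of the
            // client after it is marked closed fails with "Filesystem closed".
            if (!open) {
                throw new IOException("Filesystem closed");
            }
        }

        void close() {
            // Assumed ordering for the sketch: the client marks itself closed
            // first, and only then closes streams that are still pending, so
            // each of those closes now fails inside checkOpen().
            open = false;
            for (MockOutputStream s : pendingStreams) {
                try {
                    s.close();
                } catch (IOException e) {
                    System.err.println("Exception closing file " + s.name + " " + e);
                }
            }
        }
    }

    // Illustrative stand-in for DFSOutputStream; not the actual Hadoop class.
    static class MockOutputStream {
        final String name;
        private final MockClient client;

        MockOutputStream(MockClient client, String name) {
            this.client = client;
            this.name = name;
        }

        void close() throws IOException {
            // Completing the file on close requires the client to be open.
            client.checkOpen();
            // ... flush remaining data, finalize the file ...
        }
    }

    public static void main(String[] args) {
        MockClient client = new MockClient();
        client.create("my-file"); // a file the JT still has open

        // Stand-in for FileSystem$ClientFinalizer: a JVM shutdown hook that
        // closes the cached client on exit, whether or not files are open.
        Runtime.getRuntime().addShutdownHook(new Thread(client::close));

        // Terminating the process (e.g. via Control-C) runs the hook, and the
        // still-open stream's close prints a message of the same shape as the
        // one reported above.
        System.exit(0);
    }
}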
Attachments
Issue Links
- relates to: HADOOP-5311 Write pipeline recovery fails (Resolved)