Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 1.0.0
- Labels: None
Description
1. Create the following process groups:
   GetFile --> PutHDFS --> PutFile
   ListHDFS --> FetchHDFS --> PutFile
2. Start both process groups.
3. Write lots of files into HDFS so that ListHDFS keeps listing and FetchHDFS keeps fetching.
4. An exception is thrown because ListHDFS lists the temporary "." part file that PutHDFS is still writing to its output folder; by the time FetchHDFS tries to read it, PutHDFS has renamed it to its final name:
java.io.FileNotFoundException: File does not exist: /tmp/HDFSProcessorsTest_visjJMcHORUwigw/.ycnVSpBOzEaoTWk_7f37d5af-d4a4-4521-b60d-c3c11ae19669
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
Note that the file is eventually copied to the output successfully, but in the meantime some FlowFiles are routed to the failure and comms.failure relationships.
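One way to avoid the race is for the listing side to skip in-progress "dot" files, since PutHDFS writes to a hidden ".name" temp file and only renames it to "name" once the write completes. A minimal sketch of that filter logic (class name and sample file names are hypothetical, modeled on the report's test directory):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DotFileFilter {
    // PutHDFS writes to a hidden ".name" temp file and renames it to "name"
    // when the write completes; a listing taken mid-write sees the temp file,
    // which may no longer exist by the time a fetch is attempted.
    static List<String> visibleFiles(List<String> listing) {
        return listing.stream()
                .filter(name -> !name.startsWith("."))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> listing = List.of(
                ".ycnVSpBOzEaoTWk_7f37d5af", // in-progress PutHDFS temp file
                "completedFile1",
                "completedFile2");
        System.out.println(visibleFiles(listing)); // prints [completedFile1, completedFile2]
    }
}
```

The same predicate can be supplied to the Hadoop FileSystem API as a PathFilter when listing the directory, so temp files never enter the flow in the first place.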