Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 0.6.2
Fix Version/s: None
Component/s: None
Description
Some of my applications using Hadoop DFS receive wrong data after certain random seeks. After some investigation I believe (without having looked at the source of java.io.BufferedInputStream) that it boils down to the following: when read(byte[] b, int off, int len) is called with an external buffer larger than the internal buffer, it reads into the external buffer directly, bypassing the internal buffer, but without invalidating the internal buffer by setting the variable 'count' to 0. As a result, a subsequent seek to an offset that lies within one internal-buffer-size of the PositionCache's position is resolved inside the internal buffer, which at that point still contains outdated data from somewhere else in the stream.
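To make the failure mode concrete, here is a minimal, self-contained sketch of the interaction I am describing. It is not the actual Hadoop source: PositionTracker and SeekDemo are hypothetical stand-ins for PositionCache and the buffering wrapper around it, and the buffer and read sizes are arbitrary.

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Counts bytes pulled from the underlying stream -- the role the
// PositionCache plays in Hadoop (illustrative stand-in, not the real class).
class PositionTracker extends FilterInputStream {
    long pos;
    PositionTracker(InputStream in) { super(in); }
    @Override public int read() throws IOException {
        int b = super.read();
        if (b >= 0) pos++;
        return b;
    }
    @Override public int read(byte[] b, int off, int len) throws IOException {
        int n = super.read(b, off, len);
        if (n > 0) pos += n;
        return n;
    }
}

// Sketch of a seek optimization layered on BufferedInputStream, like the
// one described above.
class SeekDemo extends BufferedInputStream {
    SeekDemo(PositionTracker in, int size) { super(in, size); }

    void seek(long desired) throws IOException {
        long current = ((PositionTracker) in).pos;  // underlying stream position
        long start = current - count;               // assumed offset of buf[0]
        if (desired >= start && desired < current) {
            // Believed to lie inside the buffered window -- but after a
            // direct read, 'count' still describes an old fill, so this
            // repositions into stale data.
            pos = (int) (desired - start);
        } else {
            pos = count = 0;  // invalidate the buffer
            // (real code would also reposition the underlying stream here)
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1024];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        SeekDemo s = new SeekDemo(new PositionTracker(new ByteArrayInputStream(data)), 64);

        s.read();                     // fills the internal buffer with bytes 0..63
        byte[] big = new byte[256];
        s.read(big, 0, big.length);   // larger than the internal buffer: the tail
                                      // is read directly, 'count' stays at the old fill
        s.seek(200);                  // falls inside the miscomputed buffered window
        System.out.println(s.read()); // prints 7 (a stale byte), not 200
    }
}

If BufferedInputStream behaves as described, the final read returns 7 (the stale byte at buffer index 7) instead of the expected 200: the seek computes the buffered window from 'count' and the PositionCache position, lands inside the internal buffer, and serves bytes left over from the first fill.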