Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
Description
In ByteBufferReadable, the Javadoc of int read(ByteBuffer buf) says:

    After a successful call, buf.position() and buf.limit() should be unchanged,
    and therefore any data can be immediately read from buf. buf.mark() may be
    cleared or updated.

    @param buf the ByteBuffer to receive the results of the read operation.
               Up to buf.limit() - buf.position() bytes may be read.
But the actual implementations (e.g. DFSInputStream, RemoteBlockReader2) behave differently: upon return, buf.position() has been advanced by the number of bytes read.
The implementation in RemoteBlockReader2 is as follows:

    @Override
    public int read(ByteBuffer buf) throws IOException {
      if (curDataSlice == null ||
          (curDataSlice.remaining() == 0 && bytesNeededToFinish > 0)) {
        readNextPacket();
      }
      if (curDataSlice.remaining() == 0) {
        // we're at EOF now
        return -1;
      }

      int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
      ByteBuffer writeSlice = curDataSlice.duplicate();
      writeSlice.limit(writeSlice.position() + nRead);
      buf.put(writeSlice);
      curDataSlice.position(writeSlice.position());

      return nRead;
    }
This description is important because it tells users how to call the API, and all implementations should exhibit the same behavior. We should fix the Javadoc to match the implementations.
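Because the implementations advance buf.position(), a caller must flip the buffer before consuming the data, rather than reading it "immediately" as the current Javadoc suggests. The following is a minimal, self-contained sketch of that calling pattern; ArrayBackedReadable is a hypothetical stand-in (not Hadoop code) that mimics the copy-and-advance semantics of RemoteBlockReader2 shown above:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferReadDemo {
    // Hypothetical stand-in for a ByteBufferReadable stream, backed by an
    // in-memory byte array. Mirrors the observed contract: read(ByteBuffer)
    // copies up to buf.remaining() bytes and advances buf.position().
    static class ArrayBackedReadable {
        private final ByteBuffer source;

        ArrayBackedReadable(byte[] data) {
            this.source = ByteBuffer.wrap(data);
        }

        int read(ByteBuffer buf) {
            if (!source.hasRemaining()) {
                return -1; // EOF
            }
            int nRead = Math.min(source.remaining(), buf.remaining());
            ByteBuffer slice = source.duplicate();
            slice.limit(slice.position() + nRead);
            buf.put(slice);                    // advances buf.position() by nRead
            source.position(slice.position());
            return nRead;
        }
    }

    public static void main(String[] args) {
        ArrayBackedReadable in =
            new ArrayBackedReadable("hello".getBytes(StandardCharsets.UTF_8));
        ByteBuffer buf = ByteBuffer.allocate(8);

        int n = in.read(buf);
        // position() was advanced, contrary to the current Javadoc wording.
        System.out.println("read=" + n + " position=" + buf.position());

        buf.flip(); // caller must flip before consuming the bytes just read
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        System.out.println(new String(out, StandardCharsets.UTF_8));
    }
}
```

Running this prints read=5 position=5 and then hello, illustrating why the Javadoc's claim that buf.position() "should be unchanged" is misleading.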
Attachments
Issue Links
- is duplicated by HADOOP-11434 Correct the comment of ByteBufferReadable#read (Resolved)