The following code illustrates the issue at hand. The code is run as a single map-only task, with the input file on disk and the map output written to disk; a minimal sketch of such a job is given below.
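For context only, this is roughly what such a map-only job could look like using the standard org.apache.hadoop.mapreduce API; the class and mapper names are illustrative, not the attached code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyRepro {
  // Identity mapper: passes every input record straight through to the map output.
  public static class PassThroughMapper
      extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws java.io.IOException, InterruptedException {
      context.write(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "map-only repro");
    job.setJarByClass(MapOnlyRepro.class);
    job.setMapperClass(PassThroughMapper.class);
    job.setNumReduceTasks(0); // map-only: the map output is written directly to the output path
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}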
In the DataNode disk access patterns, the following pattern was observed consistently, irrespective of the bufferSize provided.
Here, fd 58 is the incoming socket, fd 107 is the blk file, and fd 108 is the .meta file.
The DFS packet size ignores the bufferSize argument, and the default 64 KB value leads to suboptimal syscall and disk performance, as is evident from the interrupted read/write operations.
Changing the packet size to a more optimal 1056405 bytes yields a noticeable performance improvement by cutting down on disk and network IOPS.
On average, throughput rises from ~115 MB/s to ~130 MB/s, purely from modifying the global packet size setting; the change used is sketched below.
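For reference, the global change behind that measurement can be made through the client-side write packet size setting (dfs.client-write-packet-size); the exact value and the surrounding code here are an illustration, not a recommendation.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobalPacketSize {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Global client-side knob: affects every stream this client writes.
    conf.setInt("dfs.client-write-packet-size", 1056405);

    FileSystem fs = FileSystem.get(conf);
    // The bufferSize argument below is accepted, but it is not reflected
    // in the packet size the DataNode actually sees.
    FSDataOutputStream out = fs.create(new Path(args[0]), true, 1056405);
    out.write(new byte[1 << 20]);
    out.close();
  }
}
{code}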
This suggests that there is value in adapting the user-provided buffer sizes to Hadoop packet sizing on a per-stream basis; a rough client-side sketch of that idea follows.
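This is not the proposed patch, only an approximation of per-stream sizing from the client side: createWithPacketSize is a hypothetical helper, and FileSystem.newInstance is used to sidestep the FileSystem cache so the altered setting stays local to one client instance.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerStreamPacketSize {
  // Hypothetical helper: derives the write packet size from the caller's
  // bufferSize instead of the global default.
  static FSDataOutputStream createWithPacketSize(Configuration base, Path path,
                                                 int bufferSize) throws IOException {
    Configuration conf = new Configuration(base);
    conf.setInt("dfs.client-write-packet-size", bufferSize);
    // newInstance() avoids the cached FileSystem, so the packet size change
    // applies only to streams created through this client instance.
    FileSystem fs = FileSystem.newInstance(conf);
    return fs.create(path, true, bufferSize);
  }
}
{code}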
||Field||Original Value||New Value||
|Attachment| |gistfe319436b880026cbad4-aad495d50e0d6b538831327752b984e0fdcc74db.tar.gz [ 12549672 ]|
|Status|Open [ 1 ]|Patch Available [ 10002 ]|
|Release Note| |Allow write packet sizes to be configurable for DFS input streams|
|Status|Patch Available [ 10002 ]|Open [ 1 ]|
||Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date||
| |7d 5h 15m|1|Gopal V|24/Oct/12 16:22|
| |16h 30m|1|Gopal V|25/Oct/12 08:52|