Hadoop HDFS / HDFS-347

DFS read performance suboptimal when client co-located on nodes with data

    Details

    • Hadoop Flags: Reviewed

      Description

      One of the major strategies Hadoop uses to get scalable data processing is to move the code to the data. However, putting the DFS client on the same physical node as the data blocks it acts on doesn't improve read performance as much as expected.

      After looking at Hadoop and O/S traces (via HADOOP-4049), I think the problem is due to the HDFS streaming protocol causing many more read I/O operations (iops) than necessary. Consider the case of a DFSClient fetching a 64-MB disk block from the DataNode process (running in a separate JVM) on the same machine. The DataNode satisfies the single disk-block request by sending data back to the HDFS client in 64-KB chunks. In BlockSender.java, this is done in the sendChunk() method, relying on Java's transferTo() method. Depending on the host O/S and JVM implementation, transferTo() is implemented as either a sendfilev() syscall or a pair of mmap() and write() calls. In either case, each chunk is read from the disk by issuing a separate I/O operation. The result is that the single request for a 64-MB block ends up hitting the disk as over a thousand smaller requests of 64 KB each.
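The chunking described above can be illustrated with a minimal sketch (class and method names here are hypothetical; BlockSender's actual loop is more involved): sending a block through transferTo() in 64-KB chunks issues one transfer call, and hence at least one separate disk I/O, per chunk, so a 64-MB block costs 64 MB / 64 KB = 1024 calls.

```java
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedSendDemo {
    // Mirrors the shape of the streaming path: one transferTo() call
    // (and hence at least one disk I/O) per 64-KB chunk.
    static final int CHUNK_SIZE = 64 * 1024;

    static long sendInChunks(FileChannel src, WritableByteChannel dst)
            throws IOException {
        long calls = 0;
        long pos = 0, size = src.size();
        while (pos < size) {
            long sent = src.transferTo(pos, Math.min(CHUNK_SIZE, size - pos), dst);
            pos += sent;
            calls++;  // each call can hit the disk as a separate request
        }
        return calls;
    }

    public static void main(String[] args) throws IOException {
        // A 1-MB file keeps the demo cheap; a full 64-MB block would
        // take 64 MB / 64 KB = 1024 transferTo() calls.
        Path tmp = Files.createTempFile("block", ".dat");
        Files.write(tmp, new byte[1024 * 1024]);
        try (FileChannel src = FileChannel.open(tmp, StandardOpenOption.READ)) {
            WritableByteChannel sink =
                Channels.newChannel(java.io.OutputStream.nullOutputStream());
            System.out.println("transferTo calls: " + sendInChunks(src, sink));
        } finally {
            Files.delete(tmp);
        }
    }
}
```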

      Since the DFSClient runs in a different JVM and process than the DataNode, shuttling data from the disk to the DFSClient also incurs context switches each time network packets are sent (in this case, each 64-KB chunk turns into a large number of 1500-byte packet send operations). Thus we see a large number of context switches for each block send operation.

      I'd like to get some feedback on the best way to address this, but I think the right approach is to provide a mechanism for a DFSClient to directly open data blocks that happen to be on the same machine. It could do this by examining the set of LocatedBlocks returned by the NameNode, marking those that should be resident on the local host. Since the DataNode and DFSClient (probably) share the same Hadoop configuration, the DFSClient should be able to find the files holding the block data, and it could open and read them directly. This would avoid the context switches imposed by the network layer, and would allow for much larger read buffers than 64 KB, which should reduce the number of iops issued for each block read operation.
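The proposal above can be sketched as follows. This is a hypothetical illustration, not HDFS code: the `Located` record stands in for the NameNode's LocatedBlock (which carries DataNode locations, not file paths), and the actual local-read implementation is far more involved. The point is the fast-path decision (is a replica on this host?) and the much larger read buffer a direct file open permits.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class LocalReadSketch {
    // Stand-in for a LocatedBlock: block id, hostnames of DataNodes
    // holding a replica, and the block file's path on local disk
    // (recoverable from the shared Hadoop configuration).
    record Located(long blockId, List<String> hosts, Path localFile) {}

    /** Fast-path check: does this host hold a replica of the block? */
    static boolean isLocal(Located blk, String localHost) {
        return blk.hosts().contains(localHost);
    }

    /** Read the block straight from the local filesystem, using a
     *  much larger buffer than the 64-KB streaming chunk size. */
    static long readLocally(Located blk) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(4 * 1024 * 1024); // 4-MB reads
        long total = 0;
        try (FileChannel ch = FileChannel.open(blk.localFile(),
                                               StandardOpenOption.READ)) {
            int n;
            while ((n = ch.read(buf)) > 0) {
                total += n;   // far fewer, larger iops than 64-KB chunks
                buf.clear();
            }
        }
        return total;
    }
}
```

A client would call `isLocal()` for each block returned by the NameNode and fall back to the normal streaming protocol for remote replicas; only local ones take the direct-open path.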

      Attachments

  1. local-reads-doc
        14 kB
        Todd Lipcon
      2. hdfs-347-merge.txt
        346 kB
        Todd Lipcon
      3. hdfs-347-merge.txt
        349 kB
        Todd Lipcon
      4. hdfs-347-merge.txt
        357 kB
        Todd Lipcon
      5. HDFS-347-branch-20-append.txt
        27 kB
        ryan rawson
      6. HDFS-347-016_cleaned.patch
        109 kB
        Colin Patrick McCabe
      7. hdfs-347.txt
        26 kB
        Todd Lipcon
      8. hdfs-347.png
        16 kB
        Todd Lipcon
      9. HDFS-347.035.patch
        343 kB
        Colin Patrick McCabe
      10. HDFS-347.033.patch
        321 kB
        Colin Patrick McCabe
      11. HDFS-347.030.patch
        260 kB
        Colin Patrick McCabe
      12. HDFS-347.029.patch
        259 kB
        Colin Patrick McCabe
      13. HDFS-347.027.patch
        261 kB
        Colin Patrick McCabe
      14. HDFS-347.026.patch
        240 kB
        Colin Patrick McCabe
      15. HDFS-347.025.patch
        240 kB
        Colin Patrick McCabe
      16. HDFS-347.024.patch
        240 kB
        Colin Patrick McCabe
      17. HDFS-347.022.patch
        249 kB
        Colin Patrick McCabe
      18. HDFS-347.021.patch
        248 kB
        Colin Patrick McCabe
      19. HDFS-347.020.patch
        245 kB
        Colin Patrick McCabe
      20. HDFS-347.019.patch
        247 kB
        Colin Patrick McCabe
      21. HDFS-347.018.patch2
        246 kB
        Colin Patrick McCabe
      22. HDFS-347.018.clean.patch
        112 kB
        Colin Patrick McCabe
      23. HDFS-347.017.patch
        247 kB
        Colin Patrick McCabe
      24. HDFS-347.017.clean.patch
        113 kB
        Colin Patrick McCabe
      25. HDFS-347.016.patch
        250 kB
        Colin Patrick McCabe
      26. HADOOP-4801.3.patch
        15 kB
        George Porter
      27. HADOOP-4801.2.patch
        15 kB
        George Porter
      28. HADOOP-4801.1.patch
        14 kB
        George Porter
      29. full.patch
        379 kB
        Colin Patrick McCabe
      30. BlockReaderLocal1.txt
        28 kB
        dhruba borthakur
      31. all.tsv
        12 kB
        Todd Lipcon
      32. a.patch
        413 kB
        Tsz Wo Nicholas Sze
      33. 2013-04-01-jenkins.patch
        424 kB
        Colin Patrick McCabe
      34. 2013.02.15.consolidated4.patch
        366 kB
        Colin Patrick McCabe
      35. 2013.01.31.consolidated2.patch
        402 kB
        Colin Patrick McCabe
      36. 2013.01.31.consolidated.patch
        402 kB
        Colin Patrick McCabe
      37. 2013.01.28.design.pdf
        72 kB
        Colin Patrick McCabe

        Issue Links

        There are no Sub-Tasks for this issue.


            People

            • Assignee: Colin Patrick McCabe
            • Reporter: George Porter
            • Votes: 12
            • Watchers: 112
