Hadoop Common / HADOOP-11873

Include disk read/write time in FileSystem.Statistics


Details

    • Type: New Feature
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: metrics
    • Labels: None

    Description

      Measuring the time spent blocking on reading / writing data from / to disk is very useful for debugging performance problems in applications that read data from Hadoop, and can give much more information (e.g., to reflect disk contention) than just knowing the total amount of data read. I'd like to add something like "diskMillis" to FileSystem#Statistics to track this.
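      The counter could follow the same pattern as the existing per-operation counters in FileSystem.Statistics. Below is a minimal sketch of what such a counter might look like; the class and method names here are illustrative, not the actual Hadoop API:

      ```java
      import java.util.concurrent.atomic.AtomicLong;

      // Hypothetical sketch of a "diskMillis" counter in the style of
      // FileSystem.Statistics. Not the real Hadoop class.
      public class DiskTimeStatistics {
          // Total milliseconds spent blocked on disk I/O, across all threads.
          private final AtomicLong diskMillis = new AtomicLong();

          // Called by instrumented read/write paths with the elapsed time.
          public void addDiskMillis(long millis) {
              diskMillis.addAndGet(millis);
          }

          public long getDiskMillis() {
              return diskMillis.get();
          }

          public static void main(String[] args) {
              DiskTimeStatistics stats = new DiskTimeStatistics();
              stats.addDiskMillis(12);
              stats.addDiskMillis(30);
              System.out.println(stats.getDiskMillis()); // 42
          }
      }
      ```

      An AtomicLong keeps the counter cheap enough to leave always on; the real FileSystem.Statistics uses per-thread data aggregated on read, which would reduce contention further.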

      For data read from HDFS, this can be done with very low overhead by timing calls to RemoteBlockReader2.readNextPacket (because this method reads large chunks of data, the time added by the instrumentation is very small relative to the time to actually read the data). For data written to HDFS, the same can be done in DFSOutputStream.waitAndQueueCurrentPacket.
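      The proposed instrumentation amounts to wrapping the blocking call with a pair of System.nanoTime() calls. A minimal sketch, where readNextPacket() is a stand-in for the real RemoteBlockReader2 method:

      ```java
      // Hypothetical sketch of per-call timing. readNextPacket() here simulates
      // a blocking read; in HDFS it would be RemoteBlockReader2.readNextPacket.
      public class TimedReader {
          private long diskMillis = 0;

          // Simulated blocking read standing in for the real packet read.
          private void readNextPacket() throws InterruptedException {
              Thread.sleep(5); // pretend to block on disk for ~5 ms
          }

          public void readWithTiming() throws InterruptedException {
              long start = System.nanoTime();
              try {
                  readNextPacket();
              } finally {
                  // Each packet is large, so this per-call accounting adds
                  // negligible overhead relative to the read itself.
                  diskMillis += (System.nanoTime() - start) / 1_000_000;
              }
          }

          public long getDiskMillis() {
              return diskMillis;
          }

          public static void main(String[] args) throws InterruptedException {
              TimedReader r = new TimedReader();
              r.readWithTiming();
              System.out.println(r.getDiskMillis());
          }
      }
      ```

      Accumulating in the finally block ensures the time is recorded even if the read throws, which matters for attributing time spent on slow or failing disks.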

      As far as I know, this information is currently accessible only by turning on HTrace. It looks like HTrace can't be selectively enabled, so a user can't turn on tracing for just RemoteBlockReader2.readNextPacket, for example, and instead needs to turn on tracing everywhere (which introduces a bunch of overhead, so sampling is necessary). It would be hugely helpful to have native metrics for time spent reading and writing to disk that are sufficiently low-overhead to be always on. (Please correct me if I'm wrong here about what's possible today!)

          People

            Assignee: Unassigned
            Reporter: Kay Ousterhout (kayousterhout)
            Votes: 0
            Watchers: 17
