Hadoop Common / HADOOP-5459

CRC errors not detected reading intermediate output into memory with problematic length


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 0.20.0
    • Fix Version: 0.20.0
    • Component: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      It is possible for the expected, uncompressed length of the segment to be less than the available, decompressed data. This can happen in some worst cases for compression, but it is exceedingly rare. It is also possible (though also fantastically unlikely) for the data to deflate to a size greater than that reported by the map. In either case, CRC errors remain undetected because IFileInputStream does not validate the checksum until the end of the stream is reached, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:

      int n = input.read(shuffleData, 0, shuffleData.length);
      while (n > 0) {
        bytesRead += n;
        // Stops once shuffleData.length (the expected length) is reached,
        // so any trailing bytes and the checksum are never read.
        n = input.read(shuffleData, bytesRead,
                       (shuffleData.length - bytesRead));
      }

      reads only up to the expected length. Without reading the whole segment, the checksum is never validated. IFileInputStream instances should always validate their checksums when closed, even if the stream has not been fully consumed.
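      To illustrate the proposed behavior, here is a minimal, hypothetical sketch (not the actual IFileInputStream implementation) of a stream whose close() drains the remaining payload and verifies a trailing CRC32, so a short read can no longer mask corruption. The class name, the 8-byte big-endian checksum trailer, and the payloadLength parameter are all assumptions for illustration:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

/**
 * Hypothetical sketch: validates the segment checksum even when close()
 * is called before the payload has been fully read.
 */
class ChecksumValidatingStream extends FilterInputStream {
  private final Checksum sum = new CRC32();
  private long remaining; // payload bytes not yet consumed

  ChecksumValidatingStream(InputStream in, long payloadLength) {
    super(in);
    this.remaining = payloadLength;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    if (remaining <= 0) {
      return -1;
    }
    int n = in.read(b, off, (int) Math.min(len, remaining));
    if (n > 0) {
      sum.update(b, off, n); // fold consumed bytes into the running CRC
      remaining -= n;
    }
    return n;
  }

  @Override
  public int read() throws IOException {
    byte[] one = new byte[1];
    return read(one, 0, 1) == 1 ? one[0] & 0xff : -1;
  }

  /** Drain the unread payload, then compare against the stored checksum. */
  @Override
  public void close() throws IOException {
    byte[] buf = new byte[4096];
    while (read(buf, 0, buf.length) > 0) {
      // advance past any bytes the caller never consumed
    }
    byte[] stored = new byte[8];
    int off = 0, n;
    while (off < 8 && (n = in.read(stored, off, 8 - off)) > 0) {
      off += n;
    }
    long expected = ByteBuffer.wrap(stored).getLong();
    in.close();
    if (expected != sum.getValue()) {
      throw new IOException("CRC mismatch: stored=" + expected
          + " computed=" + sum.getValue());
    }
  }
}
```

      With this shape, the shuffleInMemory loop above could stop at the expected length and still detect a corrupt segment, because close() finishes the checksum check on its behalf.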

      Attachments

        1. 5459-0.patch (1 kB), Christopher Douglas
        2. 5459-1.patch (5 kB), Christopher Douglas


          People

            Assignee: cdouglas Christopher Douglas
            Reporter: cdouglas Christopher Douglas
            Votes: 0
            Watchers: 0
