Hadoop Common / HADOOP-5459

CRC errors not detected reading intermediate output into memory with problematic length

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.20.0
    • Fix Version/s: 0.20.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

    It's possible for the expected, uncompressed length of the segment to be less than the available/decompressed data. This can happen in some worst cases for compression, but it is exceedingly rare. It is also possible (though fantastically unlikely) for the data to deflate to a size greater than that reported by the map. In either case, CRC errors remain undetected because IFileInputStream does not validate the checksum until the end of the stream is reached, and close() does not advance the stream to the end of the segment. The (abbreviated) read loop fetching data in shuffleInMemory:

    int bytesRead = 0;
    int n = input.read(shuffleData, 0, shuffleData.length);
    while (n > 0) {
      bytesRead += n;
      // Read into the remaining space only; the loop exits once
      // shuffleData (sized to the expected length) is full.
      n = input.read(shuffleData, bytesRead,
                     (shuffleData.length - bytesRead));
    }
      

    will read only up to the expected length. Without consuming the whole segment, the checksum is never validated. IFileInputStream should validate the checksum whenever an instance is closed, even if the stream has not been read to its end.
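    As an illustration of the invariant being asked for, here is a minimal, hypothetical sketch (the names ChecksumValidatingStream and readTrailer and the trailer format are illustrative, not the actual IFileInputStream API) of a stream that drains any unread payload in close() and verifies the CRC there, so a short read like the loop above still surfaces corruption:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    // Hypothetical sketch: a checksummed segment stream that always
    // validates its CRC, even when closed before the caller has read
    // the whole segment.
    class ChecksumValidatingStream extends InputStream {
      private final InputStream in;
      private final long dataLength;          // expected payload length
      private final Checksum sum = new CRC32();
      private long consumed = 0;

      ChecksumValidatingStream(InputStream in, long dataLength) {
        this.in = in;
        this.dataLength = dataLength;
      }

      @Override
      public int read() throws IOException {
        int b = in.read();
        if (b >= 0) { sum.update(b); consumed++; }
        return b;
      }

      @Override
      public void close() throws IOException {
        // Drain any unread payload so the checksum covers the full
        // segment; a short read (as in shuffleInMemory) would otherwise
        // skip validation entirely.
        byte[] buf = new byte[4096];
        while (consumed < dataLength) {
          int n = in.read(buf, 0,
              (int) Math.min(buf.length, dataLength - consumed));
          if (n < 0) break;   // fewer bytes than advertised: caught below
          sum.update(buf, 0, n);
          consumed += n;
        }
        long expected = readTrailer();  // stored CRC; format is assumed
        if (consumed != dataLength || sum.getValue() != expected) {
          throw new IOException("checksum error detected on close");
        }
        in.close();
      }

      // Assume a 4-byte, big-endian CRC trailer follows the payload.
      private long readTrailer() throws IOException {
        long v = 0;
        for (int i = 0; i < 4; i++) {
          int b = in.read();
          if (b < 0) throw new IOException("truncated checksum trailer");
          v = (v << 8) | b;
        }
        return v;
      }
    }

    Draining on close also catches the opposite failure, where the segment holds fewer bytes than the advertised length, since consumed never reaches dataLength.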

    Attachments

    1. 5459-1.patch (5 kB, Chris Douglas)
    2. 5459-0.patch (1 kB, Chris Douglas)


    People

    • Assignee: Chris Douglas (chris.douglas)
    • Reporter: Chris Douglas (chris.douglas)
    • Votes: 0
    • Watchers: 0
