Hadoop Common / HADOOP-2955

ant test fails for TestCrcCorruption with OutOfMemory.


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.17.0
    • Component/s: None
    • Labels: None

    Description

      TestCrcCorruption sometimes corrupts the CRC metadata in a way that corrupts the bytes-per-checksum length (the second field in the metadata). This does not happen on every run, since the corruption introduced by the test is random.

      I put a debug statement in the allocation path to see how many bytes were being allocated and ran the test a few times. This is one of the allocations in BlockSender:sendBlock():

      int maxChunksPerPacket = Math.max(1,
          (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);
      int sizeofPacket = PKT_HEADER_LEN +
          (bytesPerChecksum + checksumSize) * maxChunksPerPacket;
      LOG.info("Comment: bytes to allocate " + sizeofPacket);
      ByteBuffer pktBuf = ByteBuffer.allocate(sizeofPacket);

      The output from one of the allocations was:

      dfs.DataNode (DataNode.java:sendBlock(1766)) - Comment: bytes to allocate 1232596786
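
      That is roughly a 1.2 GB allocation for a single packet buffer. A minimal sketch of the arithmetic, using illustrative values (the constants and the corrupted value below are assumptions, not the actual BlockSender constants): once the corrupted bytesPerChecksum is far larger than BUFFER_SIZE, the rounded-up integer division collapses maxChunksPerPacket to 1, so sizeofPacket ends up roughly equal to the corrupted bytesPerChecksum itself.

      // Illustrative sketch only; all values below are assumptions.
      int BUFFER_SIZE = 4096;             // assumed client buffer size
      int PKT_HEADER_LEN = 17;            // assumed packet header length
      int checksumSize = 4;               // CRC-32 checksum is 4 bytes
      int bytesPerChecksum = 1234567890;  // hypothetical garbage read from corrupted metadata

      // bytesPerChecksum >> BUFFER_SIZE, so the rounded-up division yields 1.
      int maxChunksPerPacket = Math.max(1,
          (BUFFER_SIZE + bytesPerChecksum - 1) / bytesPerChecksum);   // == 1

      // sizeofPacket is then about the corrupted value itself (~1.2 GB), the same
      // order as the log line above, and ByteBuffer.allocate(sizeofPacket) fails
      // with OutOfMemoryError.
      int sizeofPacket = PKT_HEADER_LEN +
          (bytesPerChecksum + checksumSize) * maxChunksPerPacket;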

      So we should check the number of bytes being allocated in sendBlock (it should be less than the block size, which seems like a reasonable bound); a sketch of such a check is below.
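
      A minimal sketch of such a check, assuming a blockLength variable holding the on-disk length of the block and a block identifier are in scope (the variable names and message are illustrative, not the attached patch):

      // Sketch only: reject absurd packet sizes before allocating, so a corrupted
      // bytesPerChecksum fails the read instead of causing a huge allocation.
      if (sizeofPacket <= 0 ||
          sizeofPacket > blockLength + PKT_HEADER_LEN + checksumSize) {
        throw new IOException("Invalid checksum metadata for " + block +
                              ": computed packet size of " + sizeofPacket);
      }
      ByteBuffer pktBuf = ByteBuffer.allocate(sizeofPacket);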

      Attachments

        1. HADOOP-2955.java (0.9 kB, Raghu Angadi)
        2. HADOOP-2955.patch (0.9 kB, Raghu Angadi)


          People

            Assignee: Raghu Angadi (rangadi)
            Reporter: Mahadev Konar (mahadev)
            Votes: 0
            Watchers: 0

            Dates

              Created:
              Updated:
              Resolved: