HDFS-1770: TestFiRename fails due to invalid block size

    Details

    • Type: Test
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      HDFS-1763 exposed a bug in TestFiRename or HDFS (see HADOOP-70800), which causes the test to fail with the following error:

      Internal error: default blockSize is not a multiple of default bytesPerChecksum
      java.io.IOException: Internal error: default blockSize is not a multiple of default bytesPerChecksum
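
      A minimal sketch of the kind of sanity check that produces this error (the method and
      variable names below are illustrative, not the actual HDFS internals):

      // Illustrative only: the block size must be an exact multiple of the
      // checksum chunk size, otherwise client setup fails early.
      static void checkBlockSize(long blockSize, long bytesPerChecksum) throws java.io.IOException {
        if (bytesPerChecksum <= 0 || blockSize % bytesPerChecksum != 0) {
          throw new java.io.IOException(
              "Internal error: default blockSize is not a multiple of default bytesPerChecksum");
        }
      }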

      Previously this test passed because it used dfs.block.size (instead of dfs.blocksize), though the behavior should be equivalent since one key deprecates the other.
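
      For a test that overrides the block size, any value that is an exact multiple of
      dfs.bytes-per-checksum satisfies the check above. A minimal sketch, assuming Hadoop's
      Configuration API and the default 512-byte checksum chunk (the class name is only an
      example, not part of the test):

      import org.apache.hadoop.conf.Configuration;

      public class BlockSizeConfigExample {
        public static void main(String[] args) {
          Configuration conf = new Configuration();
          // Read the checksum chunk size, falling back to the HDFS default of 512 bytes.
          long bytesPerChecksum = conf.getLong("dfs.bytes-per-checksum", 512);
          // Choose a block size that is an exact multiple of bytes-per-checksum.
          long blockSize = 2048 * bytesPerChecksum; // 1 MB when bytesPerChecksum is 512
          // Use the current key; dfs.block.size is the deprecated spelling of dfs.blocksize.
          conf.setLong("dfs.blocksize", blockSize);
          System.out.println("dfs.blocksize = " + conf.getLong("dfs.blocksize", 0));
        }
      }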

    Attachments

    1. hdfs-1770-1.patch (0.5 kB, Eli Collins)

    People

    • Assignee: Eli Collins
    • Reporter: Eli Collins
    • Votes: 0
    • Watchers: 0
