Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.23.1, 2.0.0-alpha
    • Fix Version/s: 2.0.0-alpha
    • Component/s: datanode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

Description

      STEPS:
      1. Deploy a single-node HDFS 0.23.1 cluster and configure HDFS as follows:
         A) enable WebHDFS
         B) enable append
         C) disable permissions
      2. Start HDFS.
      3. Run the attached test script.
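The three settings in step 1 would live in hdfs-site.xml. A minimal sketch, assuming 0.23-era property names (dfs.webhdfs.enabled, dfs.support.append, dfs.permissions.enabled); later releases rename or drop some of these (append is always enabled in 2.x, for example):

```xml
<!-- hdfs-site.xml fragment: the three settings from step 1 above. -->
<configuration>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
```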

      RESULT:
      Expected: a file named testFile is created and populated with 32K * 5000 zeros, and HDFS remains healthy.
      Actual: the script never finishes; the file is created but not populated as expected, because the append operation fails.
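The attached test.sh is authoritative and is not reproduced here. As a rough illustration only, an append loop over WebHDFS along the lines described might look like the following; the namenode address (localhost:50070), the target path (/testFile), and all function names are assumptions for this sketch:

```shell
#!/bin/sh
# Hypothetical sketch of the append stress test described above.
# Assumptions: namenode HTTP interface on localhost:50070, target /testFile.
NN="${NN:-localhost:50070}"
DST="/testFile"
CHUNK=/tmp/chunk

# One 32K chunk of zeros, to be appended repeatedly.
make_chunk() { dd if=/dev/zero of="$CHUNK" bs=32k count=1 2>/dev/null; }

# WebHDFS writes are a two-step protocol: the namenode answers with a
# 307 redirect to a datanode, and the data is sent to that second URL.
redirect_of() {
  curl -s -i -X "$1" "$2" | awk '/^Location:/ {print $2}' | tr -d '\r'
}

create_file() {
  loc=$(redirect_of PUT "http://$NN/webhdfs/v1$DST?op=CREATE&overwrite=true")
  curl -s -X PUT -T /dev/null "$loc"
}

append_chunk() {
  loc=$(redirect_of POST "http://$NN/webhdfs/v1$DST?op=APPEND")
  curl -s -f -X POST -T "$CHUNK" "$loc"
}

run_test() {
  make_chunk
  create_file
  i=0
  while [ "$i" -lt 5000 ]; do
    append_chunk || { echo "append $i failed"; return 1; }
    i=$((i + 1))
  done
}

# Only run against a live cluster when explicitly asked:
#   NN=localhost:50070 sh test-sketch.sh run
if [ "${1:-}" = "run" ]; then run_test; fi
```

A failed append anywhere in the 5000 iterations aborts the run, matching the observed behavior.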

      The datanode log shows that the block scanner reported a bad replica and the namenode decided to delete it. Since this is a single-node cluster, the append then fails. The script fails this way every time, which should not happen.

      Datanode and Namenode logs are attached.

      1. testAppend.patch
        3 kB
        Tsz Wo Nicholas Sze
      2. test.sh
        0.4 kB
        Zhanwei Wang
      3. HDFS-3100.patch
        6 kB
        Brandon Li
      4. HDFS-3100.patch
        7 kB
        Brandon Li
      5. HDFS-3100.patch
        7 kB
        Brandon Li
      6. HDFS-3100.patch
        6 kB
        Brandon Li
      7. HDFS-3100.patch
        6 kB
        Brandon Li
      8. hadoop-wangzw-namenode-ubuntu.log
        797 kB
        Zhanwei Wang
      9. hadoop-wangzw-datanode-ubuntu.log
        782 kB
        Zhanwei Wang

Issue Links

Activity

People

            • Assignee: Brandon Li
            • Reporter: Zhanwei Wang
            • Votes: 0
            • Watchers: 4

Dates

              • Created:
                Updated:
                Resolved:

Development