[HADOOP-4533] HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.18.1
    • Fix Version/s: 0.18.2
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      I am not sure whether this is considered a bug or expected behavior,
      but here are the details.

      I have a cluster running a build from the hadoop 0.18 branch.
      When I tried to use the hadoop 0.18.1 dfs client to load files into it, I got the following exceptions:

      hadoop --config ~/test dfs -copyFromLocal gridmix-env /tmp/.
      08/10/28 16:23:00 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
      08/10/28 16:23:00 INFO dfs.DFSClient: Abandoning block blk_-439926292663595928_1002
      08/10/28 16:23:06 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
      08/10/28 16:23:06 INFO dfs.DFSClient: Abandoning block blk_5160335053668168134_1002
      08/10/28 16:23:12 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
      08/10/28 16:23:12 INFO dfs.DFSClient: Abandoning block blk_4168253465442802441_1002
      08/10/28 16:23:18 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
      08/10/28 16:23:18 INFO dfs.DFSClient: Abandoning block blk_-2631672044886706846_1002
      08/10/28 16:23:24 WARN dfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2349)
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1735)
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1912)

      08/10/28 16:23:24 WARN dfs.DFSClient: Error Recovery for block blk_-2631672044886706846_1002 bad datanode[0]
      copyFromLocal: Could not get block locations. Aborting...
      Exception closing file /tmp/gridmix-env
      java.io.IOException: Could not get block locations. Aborting...
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
      at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
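
      The repeated "Exception in createBlockOutputStream ... Could not read from stream" followed by
      "Abandoning block" is the signature of the datanode hanging up on the client at the very start of
      the write pipeline, which is what happens when the two sides disagree on the data-transfer wire
      version: the datanode reads the version number the client sends, rejects it, and closes the socket,
      so the client's read of the pipeline reply fails. The sketch below is illustrative only, not actual
      Hadoop source; the version numbers and opcode value are hypothetical stand-ins for
      DataTransferProtocol.DATA_TRANSFER_VERSION and the write-block opcode:

      import java.io.DataInputStream;
      import java.io.DataOutputStream;
      import java.io.IOException;
      import java.net.ServerSocket;
      import java.net.Socket;

      // Illustrative sketch, not Hadoop source: a "datanode" that rejects an
      // unexpected wire version by silently closing the connection, and a
      // "client" that then fails while reading the pipeline reply.
      public class VersionMismatchDemo {
        static final int SERVER_WIRE_VERSION = 14;  // hypothetical 0.18.2 value
        static final int CLIENT_WIRE_VERSION = 13;  // hypothetical 0.18.1 value
        static final int OP_WRITE_BLOCK = 80;       // hypothetical opcode value

        public static void main(String[] args) throws Exception {
          try (ServerSocket server = new ServerSocket(0)) {
            Thread datanode = new Thread(() -> {
              try (Socket s = server.accept()) {
                DataInputStream in = new DataInputStream(s.getInputStream());
                if (in.readShort() != SERVER_WIRE_VERSION) {
                  return;  // version mismatch: hang up without replying
                }
                // ... a real datanode would read the opcode and the block here
              } catch (IOException ignored) {
              }
            });
            datanode.start();

            // Client side, analogous to DFSClient.createBlockOutputStream().
            try (Socket s = new Socket("localhost", server.getLocalPort())) {
              DataOutputStream out = new DataOutputStream(s.getOutputStream());
              out.writeShort(CLIENT_WIRE_VERSION);  // old client, old version
              out.writeByte(OP_WRITE_BLOCK);
              out.flush();
              // The client now expects a status reply, but the datanode has
              // already closed the socket, so this read fails.
              new DataInputStream(s.getInputStream()).readShort();
            } catch (IOException e) {
              System.out.println("Exception in createBlockOutputStream " + e);
            }
            datanode.join();
          }
        }
      }

      Because the socket is closed without any error reply, the client cannot distinguish a version
      mismatch from a dead datanode, so it abandons the block and asks the namenode for a new one;
      after four such attempts it gives up with "Unable to create new block", exactly as in the log above.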

      This problem has a severe impact on Pig 2.0: Pig 2.0 is pre-packaged with hadoop 0.18.1 and
      uses the hadoop 0.18.1 dfs client for all of its interaction with the hadoop cluster.
      That means Pig 2.0 will not work with the soon-to-be-released hadoop 0.18.2.

        Issue Links

          • This issue relates to HADOOP-4538

          Activity

          Owen O'Malley made changes -
            Component/s: dfs
          Nigel Daley made changes -
            Status: Resolved → Closed
          Hairong Kuang made changes -
            Status: Open → Resolved
            Hadoop Flags: [Reviewed]
            Resolution: Fixed
          Tsz Wo Nicholas Sze made changes -
            Link: This issue relates to HADOOP-4538
          Robert Chansler made changes -
            Priority: Major → Blocker
            Fix Version/s: 0.18.2
          Hairong Kuang made changes -
            Attachment: balancerRM_br18.patch
          Hairong Kuang made changes -
            Attachment: balancerRM-b18.patch
          Hairong Kuang made changes -
            Attachment: balancerRM-b18.patch
          Owen O'Malley made changes -
            Assignee: Hairong Kuang
          Runping Qi made changes -
            Component/s: dfs
            Description: (edited)
          Runping Qi created issue

            People

            • Assignee: Hairong Kuang
            • Reporter: Runping Qi
            • Votes: 0
            • Watchers: 1
