Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.20.1, 0.20.2
- Component/s: None
- Hadoop Flags: Reviewed
Description
After upgrading to the latest HDFS 0.20.2 (r896310 from /branches/branch-0.20), old DFS clients (0.20.1) no longer work. HBase ships the 0.20.1 hadoop core jars, and the HBase master will no longer start up. Here is the exception from the HBase master log:
2010-01-06 09:59:46,762 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_3380512596555557728_1002 file=/hbase/hbase.version
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1788)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1616)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1743)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1673)
	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:320)
	at java.io.DataInputStream.readUTF(DataInputStream.java:572)
	at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:189)
	at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:208)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:208)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
2010-01-06 09:59:46,763 FATAL org.apache.hadoop.hbase.master.HMaster: Not starting HMaster because: java.io.IOException: Could not obtain block: blk_3380512596555557728_1002 file=/hbase/hbase.version
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1788)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1616)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1743)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1673)
	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:320)
	at java.io.DataInputStream.readUTF(DataInputStream.java:572)
	at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:189)
	at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:208)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:208)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
If I replace the hadoop jars in the hbase/lib directory with the 0.20.2 versions it works fine, which is what led me to open this bug here rather than in the HBASE project.
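For reference, the workaround above can be sketched as a small script. The install paths and exact jar file names are assumptions; adjust them to your layout before use.

```shell
# Sketch of the workaround: swap the hadoop core jar bundled with HBase
# for the one matching the upgraded 0.20.2 cluster.
# $1 = HBase install dir, $2 = directory holding the 0.20.2 core jar.
# Jar names are illustrative, not authoritative.
replace_hadoop_jar() {
  hbase_home="$1"
  hadoop_home="$2"
  # Drop the stale 0.20.1 client jar shipped with HBase.
  rm -f "$hbase_home/lib/hadoop-0.20.1-core.jar"
  # Install the jar that matches the running cluster.
  cp "$hadoop_home/hadoop-0.20.2-core.jar" "$hbase_home/lib/"
}
```

Restart the HBase master afterwards so it picks up the new client jar from its classpath.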
Attachments
Issue Links
- is related to HDFS-793: DataNode should first receive the whole packet ack message before it constructs and sends its own ack message for the packet (Closed)