Type: Sub-task
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: 3.0.0-alpha4
Fix Version/s: 3.0.0-beta1
Component/s: rolling upgrades
Labels: None
Target Version/s:
Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the client side it looks like this:

17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
But on the DataNode side there is an ArrayIndexOutOfBoundsException because there aren't any targetStorageIds:

java.lang.ArrayIndexOutOfBoundsException: 0
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:745)
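The ArrayIndexOutOfBoundsException suggests writeBlock indexes the first element of targetStorageIds without checking whether the (older) client sent any. A minimal sketch of the defensive pattern, not the actual HDFS fix: the class and helper names below are illustrative, only the targetStorageIds concept comes from the stack trace above.

```java
// Illustrative sketch: guard against empty storage-ID arrays from
// pre-HDFS-9807 (Hadoop 2) clients instead of indexing blindly.
public class StorageIdGuard {
    /**
     * Returns the requested storage ID, or a fallback when the client
     * sent none (as a Hadoop 2 client, which predates storage IDs in
     * the write request, would).
     */
    public static String storageIdOrDefault(String[] targetStorageIds,
                                            int index,
                                            String fallback) {
        if (targetStorageIds == null || targetStorageIds.length <= index) {
            return fallback; // old client: no storage IDs in the request
        }
        return targetStorageIds[index];
    }

    public static void main(String[] args) {
        // Hadoop 2 client: empty array -> fallback rather than an
        // ArrayIndexOutOfBoundsException
        System.out.println(storageIdOrDefault(new String[0], 0, "unknown"));
        // Hadoop 3 client: storage ID present -> use it
        System.out.println(storageIdOrDefault(new String[]{"DS-1"}, 0, "unknown"));
    }
}
```

The point of the guard is that the storage-ID field is effectively optional on the wire, so the server side must treat its absence as a legal, compatible request.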
is broken by:
HDFS-9807 Add an optional StorageID to writes (Resolved)

is related to:
HDFS-12207 A few DataXceiver#writeBlock cleanups related to optional storage IDs and types (Open)