Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
While playing with the branch, I found the following:
java.io.EOFException: Premature EOF: no length prefix available
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2343)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:823)
2015-04-23 23:21:08,666 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
2015-04-23 23:21:08,769 INFO ipc.Server (Server.java:stop(2540)) - Stopping server on 57920
2015-04-23 23:21:08,770 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(826)) - Exception for BP-1850767374-10.239.12.51-1429802363548:blk_-9223372036854775737_1007
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:793)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
	at java.lang.Thread.run(Thread.java:745)
2015-04-23 23:21:08,769 INFO datanode.DataNode (BlockReceiver.java:run(1250)) - PacketResponder: BP-1850767374-10.239.12.51-1429802363548:blk_-9223372036854775737_1007, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2015-04-23 23:21:08,776 WARN datanode.DataNode (BPServiceActor.java:offerService(756)) - BPOfferService for Block pool BP-1850767374-10.239.12.51-1429802363548 (Datanode Uuid 72b12e39-77cb-463d-a919-0ac06d166fcd) service to localhost/127.0.0.1:36877 interrupted
2015-04-23 23:21:08,776 INFO ipc.Server (Server.java:run(846)) - Stopping IPC Server Responder
[~libo-intel], would you help with this?
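For context, the "Premature EOF: no length prefix available" message appears when the client-side ResponseProcessor tries to read the varint length prefix of the next PipelineAck protobuf and the stream has already reached end-of-file, which matches the DataNode shutdown and interruption messages in the same log excerpt. The snippet below is an illustrative sketch of that varint-prefix read pattern in plain Java; it is not the actual PBHelper.vintPrefixed implementation, just a demonstration of how a closed connection surfaces as this EOFException.

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class VarintPrefixSketch {
  // Reads a protobuf-style varint length prefix from the stream.
  // If the peer closed the connection before sending anything,
  // the very first read() returns -1 and we surface it as an
  // EOFException, analogous to the error seen in the log above.
  static int readVarintPrefix(InputStream in) throws IOException {
    int b = in.read();
    if (b == -1) {
      throw new EOFException("Premature EOF: no length prefix available");
    }
    int result = b & 0x7f;
    int shift = 7;
    // Continuation bit (0x80) set means more varint bytes follow.
    while ((b & 0x80) != 0) {
      b = in.read();
      if (b == -1) {
        throw new EOFException("Premature EOF while reading varint");
      }
      result |= (b & 0x7f) << shift;
      shift += 7;
    }
    return result;
  }
}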
Attachments
Issue Links
- is duplicated by: HDFS-8239 Erasure coding: [bug] should always allocate unique striped block group IDs (Resolved)