Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 3.0.0-beta1
Description
Found while testing with Hive. We have a cluster with two DataNodes and the XOR-2-1 erasure coding policy. If you write a file and call close() twice, it throws this exception:
17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[]
...
Caused by: java.io.IOException: Failed to get parity block, index=2
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?]
This is because in DFSStripedOutputStream#closeImpl, if the stream is already closed, we throw an exception whenever any of the striped streamers had an exception:
protected synchronized void closeImpl() throws IOException {
  if (isClosed()) {
    final MultipleIOException.Builder b = new MultipleIOException.Builder();
    for (int i = 0; i < streamers.size(); i++) {
      final StripedDataStreamer si = getStripedDataStreamer(i);
      try {
        si.getLastException().check(true);
      } catch (IOException e) {
        b.add(e);
      }
    }
    final IOException ioe = b.build();
    if (ioe != null) {
      throw ioe;
    }
    return;
  }
I think this is incorrect: we only need to throw here if too many streamers have failed, i.e. more than the policy's parity count can tolerate. close() should also be idempotent, so if it is going to throw at all, it should throw the first time it is called.
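The intended behavior can be sketched as follows. This is a hypothetical illustration, not the actual HDFS patch: the class name `StripedCloser`, its constructor, and the way failures are collected are all assumptions made for the example; it only shows the two properties argued for above, (a) fail only when the number of failed streamers exceeds the parity count, and (b) make repeated close() calls no-ops.

```java
import java.io.IOException;
import java.util.List;

/**
 * Hypothetical sketch (not the real DFSStripedOutputStream): close logic
 * that tolerates up to parityUnits failed streamers and is idempotent.
 */
class StripedCloser {
    private final int parityUnits;                  // e.g. 1 for XOR-2-1
    private final List<IOException> streamerFailures;
    private boolean closed = false;

    StripedCloser(int parityUnits, List<IOException> streamerFailures) {
        this.parityUnits = parityUnits;
        this.streamerFailures = streamerFailures;
    }

    synchronized void close() throws IOException {
        if (closed) {
            return; // second and later calls: do nothing, never rethrow
        }
        closed = true; // mark closed before throwing, so a retry is a no-op
        if (streamerFailures.size() > parityUnits) {
            // More failures than parity can recover: surface the first error.
            throw streamerFailures.get(0);
        }
        // Otherwise the data is still decodable; close succeeds quietly.
    }
}
```

With XOR-2-1 (one parity unit), one failed streamer is tolerated and close() succeeds; two failures make the first close() throw, while a second close() returns silently instead of rethrowing, which is what the double-close in the Hive test would need.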
Attachments
Issue Links
- is related to HDFS-12613: Native EC coder should implement release() as idempotent function. (Resolved)