Details
Type: Improvement
Status: Resolved
Priority: Minor
Resolution: Fixed
Hadoop Flags: Reviewed
Description
Currently, if there are fewer DataNodes than the erasure coding policy's total width (# of data blocks + # of parity blocks), the client sees this:
17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 corrupt blocks.
The first line is fine. The second line may be confusing to end users, since no block is actually corrupt; the write simply could not allocate enough DataNodes for the full block group. We should investigate the error and make the message more general / accurate. Maybe something like 'failed to read x blocks'.
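For reference, a minimal sketch (not part of any patch here) of how a client could check this condition up front, assuming fs.defaultFS points at HDFS and a hypothetical EC directory /ec-dir; it compares the policy's width (data units + parity units) against the number of live DataNodes reported by the NameNode:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class EcWidthCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical directory with an erasure coding policy (e.g. RS-10-4-1024k) set on it.
    Path ecDir = new Path("/ec-dir");
    try (DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf)) {
      ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(ecDir);
      if (policy == null) {
        System.out.println("No erasure coding policy set on " + ecDir);
        return;
      }
      // Width of one full block group: data blocks plus parity blocks.
      int width = policy.getNumDataUnits() + policy.getNumParityUnits();
      // Live DataNodes as reported by the NameNode.
      DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
      if (live.length < width) {
        System.out.printf("Only %d live DataNodes for policy %s, which needs %d;"
                + " some blocks of each full block group cannot be allocated.%n",
            live.length, policy.getName(), width);
      }
    }
  }
}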