Description
When the NameNode is unable to identify a DataNode for replication, the reason should be logged (at minimum the critical information about why DNs were not chosen, e.g. disk full). At present, enabling debug logging is required to see this.
For example, the likely reason for the error below is that all 7 DNs are busy with data writes, but neither the client-side nor the NameNode-side log message gives any hint of that.
File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 7 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553)
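As a workaround until the reasons are surfaced at a higher log level, the debug logging mentioned above can be enabled for the block placement policy. A minimal sketch of the log4j.properties entry, assuming the Log4j 1.x properties format used by Hadoop's default NameNode logging configuration (the logger name follows the class cited in HDFS-12726):

```
# Hypothetical workaround: surface the "not enough replicas" reasons
# built by BlockPlacementPolicyDefault's debugLoggingBuilder.
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
```

With this set on the NameNode, the per-DN rejection reasons (disk full, load too high, etc.) should appear in the NameNode log rather than only the generic "could only be replicated to 0 nodes" message.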
Attachments
Issue Links
- duplicates HDFS-12726 BlockPlacementPolicyDefault's debugLoggingBuilder may not be logged (Resolved)