If a DFS client closes a file while the replicas of its last block are being decommissioned, close() may fail unless the decommission of that block completes within a few seconds.
When a DataNode is being decommissioned, the NameNode marks its state as DECOMMISSION_INPROGRESS, and blocks with replicas on that DataNode immediately become under-replicated. A close() call that attempts to complete the last open block will fail if the number of live replicas is below the minimum replication factor, because too many of the block's replicas reside on decommissioning DataNodes.
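To make the failure mode concrete, here is a minimal sketch of the check as this report describes it. This is an assumption about the logic, not the actual BlockManager code: replicas on DECOMMISSION_INPROGRESS DataNodes are not counted as live, so the block can fall below the minimum replication factor even though enough physical copies exist.

```java
public class LastBlockCompleteCheck {
    // Hypothetical check: can the last open block be completed?
    // Replicas on decommissioning DataNodes do not count as live.
    static boolean canComplete(int totalReplicas,
                               int decommissioningReplicas,
                               int minReplication) {
        int liveReplicas = totalReplicas - decommissioningReplicas;
        return liveReplicas >= minReplication;
    }

    public static void main(String[] args) {
        // 3 replicas, 2 of them on decommissioning nodes,
        // min replication factor 2: close() fails.
        System.out.println(canComplete(3, 2, 2)); // false
        // Same block with no decommissioning replicas: close() succeeds.
        System.out.println(canComplete(3, 0, 2)); // true
    }
}
```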
Internally, the client retries completing the last open block up to 5 times by default, which takes roughly 12 seconds in total. After that, close() throws an exception, which client code typically does not handle properly.
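For illustration, the retry window can be estimated as follows. The initial 400 ms delay and the doubling back-off are assumptions (the retry count corresponds to the client setting dfs.client.block.write.locateFollowingBlock.retries), but they reproduce the roughly 12-second figure above:

```java
public class CompleteFileRetryEstimate {
    // Sum the back-off sleeps across all retry attempts, assuming an
    // initial delay that doubles after each failed completeFile() call.
    static long totalRetryWaitMs(int retries, long initialDelayMs) {
        long total = 0;
        long delay = initialDelayMs;
        for (int i = 0; i < retries; i++) {
            total += delay;
            delay *= 2; // exponential back-off
        }
        return total;
    }

    public static void main(String[] args) {
        // 400 + 800 + 1600 + 3200 + 6400 = 12400 ms, roughly 12 seconds
        System.out.println(totalRetryWaitMs(5, 400));
    }
}
```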
Once the exception is thrown, the client usually does not attempt to close again, so the file remains open and the last block remains under-replicated.
Subsequently, an administrator ran the recoverLease tool to salvage the file, but the attempt failed because the block was still under-replicated. It is not clear why the block was never replicated, though. Because fsck -openforwrite showed the file as still open while its modification time was hours old, the administrators concluded it had become a corrupt file.
In summary, I do not think close() should fail because the last block is being decommissioned. The block has a sufficient number of replicas; it is just that some of them are being decommissioned. Decommissioning should be transparent to clients.
This issue seems to be more prominent on very large clusters with the minimum replication factor set to 2.
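For reference, a minimal hdfs-site.xml sketch of the two settings involved; the values shown are the ones described in this report (minimum replication of 2, 5 client retries), not recommendations:

```xml
<!-- Minimum number of live replicas required before a block can be completed. -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>2</value>
</property>

<!-- Client-side retry count for completing the last block on close(). -->
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <value>5</value>
</property>
```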