Brahma Reddy Battula Sorry, I didn't make myself clear.
To begin with, this behavior was introduced by
HDFS-8492, which throws FileNotFoundException("BlockId " + blockId + " is not valid.").
I was just thinking that the "Too many open files" error message comes from within the Java library, so there's no guarantee it is consistent across operating systems, Java versions, or JVM/JDK implementations.
IMHO, a more compatible approach would be to check whether the FNFE message is "BlockId " + blockId + " is not valid.", and only delete the block in that case.
HDFS-3100 throws FileNotFoundException("Meta-data not found for " + block) when the meta file checksum is not found, so this message should be checked as well.
Alternatively, a new exception type could be thrown in these two cases, so callers don't have to rely on string matching at all.
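To make the idea concrete, here is a minimal sketch of the message-matching approach. The class and method names (FnfeCheck, isInvalidBlockFnfe, isMissingMetaFnfe) are hypothetical, not actual HDFS code; the messages are the ones quoted above from HDFS-8492 and HDFS-3100:

```java
import java.io.FileNotFoundException;

// Hypothetical helper, not actual HDFS code: only treat an FNFE as
// "block is gone" when its message matches one of the known formats.
public class FnfeCheck {

    // Matches the exact message thrown by HDFS-8492.
    static boolean isInvalidBlockFnfe(FileNotFoundException e, long blockId) {
        String expected = "BlockId " + blockId + " is not valid.";
        return expected.equals(e.getMessage());
    }

    // Matches the message thrown by HDFS-3100 when the meta file is missing.
    static boolean isMissingMetaFnfe(FileNotFoundException e, String block) {
        String expected = "Meta-data not found for " + block;
        return expected.equals(e.getMessage());
    }

    public static void main(String[] args) {
        FileNotFoundException invalid =
                new FileNotFoundException("BlockId 42 is not valid.");
        // An unrelated FNFE (e.g. from hitting the open-file limit) must NOT
        // trigger block deletion.
        FileNotFoundException unrelated =
                new FileNotFoundException("Too many open files");

        System.out.println(isInvalidBlockFnfe(invalid, 42));    // prints true
        System.out.println(isInvalidBlockFnfe(unrelated, 42));  // prints false
    }
}
```

Of course, this still couples the caller to the exact message strings, which is why a dedicated exception type (the alternative above) would be the cleaner long-term fix.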