Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 1.0.4, 2.0.4-alpha
- Component/s: None
- Hadoop Flags: Incompatible change, Reviewed
-
Description
We recently had an issue where a user set the block size extremely low and managed to create a single file with hundreds of thousands of blocks. This caused problems with the edit log, since the OP_ADD op became so large (HDFS-4304). It could likely also cause efficiency issues in the NN. To prevent users from making such mistakes, we should:
- introduce a configurable minimum block size, below which requests are rejected
- introduce a configurable maximum number of blocks per file, above which requests to add another block are rejected (with a suitably high default so as not to prevent legitimate large files); see the sketch after this list
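A minimal sketch of what these two server-side checks might look like is below. The configuration key names, defaults, and the standalone checker class are illustrative assumptions for this example, not necessarily what the committed patch uses; in the real NameNode the checks would be wired into the create and addBlock paths.

```java
import java.io.IOException;

// Illustrative sketch only: key names, defaults, and class structure are
// assumptions modeled on dfs.namenode.fs-limits.* style settings.
public class BlockLimitChecker {
    // Hypothetical configuration keys and defaults for this example.
    static final String MIN_BLOCK_SIZE_KEY = "dfs.namenode.fs-limits.min-block-size";
    static final long MIN_BLOCK_SIZE_DEFAULT = 1024 * 1024;        // 1 MB
    static final String MAX_BLOCKS_PER_FILE_KEY = "dfs.namenode.fs-limits.max-blocks-per-file";
    static final long MAX_BLOCKS_PER_FILE_DEFAULT = 1_000_000L;

    private final long minBlockSize;
    private final long maxBlocksPerFile;

    BlockLimitChecker(long minBlockSize, long maxBlocksPerFile) {
        this.minBlockSize = minBlockSize;
        this.maxBlocksPerFile = maxBlocksPerFile;
    }

    /** Reject file creation when the requested block size is below the configured minimum. */
    void checkBlockSize(String src, long requestedBlockSize) throws IOException {
        if (requestedBlockSize < minBlockSize) {
            throw new IOException("Specified block size " + requestedBlockSize
                + " for " + src + " is below the configured minimum " + minBlockSize
                + " (" + MIN_BLOCK_SIZE_KEY + ")");
        }
    }

    /** Reject adding another block once a file already holds the configured maximum. */
    void checkBlockCount(String src, int currentBlockCount) throws IOException {
        if (currentBlockCount >= maxBlocksPerFile) {
            throw new IOException("File " + src + " already has " + currentBlockCount
                + " blocks; the configured maximum is " + maxBlocksPerFile
                + " (" + MAX_BLOCKS_PER_FILE_KEY + ")");
        }
    }
}
```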
Attachments
Issue Links
- is duplicated by: HDFS-5463 NameNode should limit the number of blocks per file (Resolved)
- is related to: ACCUMULO-2266 TServer should ensure wal settings are valid for underlying FS (Resolved)