Details
- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 2.1.0-beta
- Component/s: None
- Labels: None
Description
Currently the RPC server allocates however much memory the client asks for, without validation. It would be nice to make the maximum RPC buffer size configurable. This would prevent a rogue client from bringing down the NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers. It would also make it easier to debug issues with very large RPCs or malformed headers, since OOMs can be difficult for developers to reproduce.
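As a rough illustration of the proposed check, the sketch below shows one way a server could validate the client-declared length against a configurable cap before allocating the buffer. This is not the actual Hadoop implementation; the class name, method, and the 64 MB default are hypothetical, illustrative choices.

```java
import java.io.DataInputStream;
import java.io.IOException;

public class RpcLengthCheckSketch {
  // Illustrative default cap; the real configured value would come from the server's
  // configuration rather than a hard-coded constant.
  static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024;

  private final int maxDataLength;

  public RpcLengthCheckSketch(int maxDataLength) {
    this.maxDataLength = maxDataLength;
  }

  /** Reads the 4-byte length prefix and refuses to allocate oversized buffers. */
  public byte[] readRequest(DataInputStream in) throws IOException {
    int dataLength = in.readInt();
    if (dataLength < 0 || dataLength > maxDataLength) {
      // Reject before allocating, so a rogue client cannot trigger an OOM
      // simply by declaring a huge (or negative) payload size.
      throw new IOException("Requested data length " + dataLength
          + " exceeds maximum " + maxDataLength);
    }
    byte[] data = new byte[dataLength];
    in.readFully(data);
    return data;
  }
}
```

Rejecting the request before allocation is the key point: the cost of a malformed or malicious header is a thrown exception on one connection, not a multi-gigabyte allocation in the daemon's heap.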
Attachments
Issue Links
- is related to
  - HDFS-10593 MAX_DIR_ITEMS should not be hard coded since RPC buff size is configurable (Resolved)
- relates to
  - HDFS-4940 namenode OOMs under Bigtop's TestCLI (Closed)
  - HADOOP-13039 Add documentation for configuration property ipc.maximum.data.length for controlling maximum RPC message size (Closed; see the configuration sketch after this list)