Details
- Type: Bug
- Status: Open
- Priority: Blocker
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
- Hadoop Flags: Incompatible change
- Release Note: The default value of "yarn.nodemanager.vmem-check-enabled" was changed to false.
Description
In our Hadoop 2 + Java 8 effort, we found that a few jobs were being killed by YARN for excessive virtual memory allocation, even though their physical memory usage was low.
The most common error message is: "Container [pid=??,containerID=container_??] is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing container."
We see this problem for MR jobs as well as for Spark drivers and executors.
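For context, the 2.1 GB limit in the error above is the container's physical allocation (1 GB) multiplied by yarn.nodemanager.vmem-pmem-ratio, whose default is 2.1. Below is a minimal yarn-site.xml sketch of the two properties involved; the ratio value of 4 is an illustrative assumption, not a recommendation from this issue:

```xml
<!-- yarn-site.xml: sketch of the settings involved in this issue. -->
<property>
  <!-- Disable the NodeManager's virtual-memory check entirely;
       this issue changes the default of this property to false. -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <!-- Alternative workaround: keep the check but raise the
       virtual-to-physical memory ratio (default 2.1). The value 4
       here is illustrative only. -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Either disabling the check or raising the ratio stops the NodeManager from killing containers whose virtual footprint (common under Java 8, which reserves more virtual address space) exceeds the computed limit.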
Attachments
Issue Links
- is related to: YARN-2225 Turn the virtual memory check to be off by default (Resolved)
- relates to: HADOOP-11090 [Umbrella] Support Java 8 in Hadoop (Resolved)