Description
We recently found that KMS performance regressed in Hadoop 3.0, possibly linked to the migration from Tomcat to Jetty in HADOOP-13597.
Symptoms:
- Hadoop 3.x KMS open file descriptors quickly rise to more than 10 thousand under stress, sometimes even exceeding 32K, the system limit, causing failures for any access to encryption zones. Our internal testing shows the open fd count stayed in the range of a few hundred in Hadoop 2.x; it increases by almost 100x in Hadoop 3.
- Hadoop 3.x KMS uses as much as twice the heap of Hadoop 2.x, and a heap size that was sufficient in Hadoop 2.x can go OOM in Hadoop 3.x. JXray analysis suggests most of the extra usage comes from temporary byte arrays associated with open SSL connections.
- Due to the increased heap usage, Hadoop 3.x KMS runs GC more frequently, and we observed up to a 20% performance reduction caused by GC.
A possible solution is to reduce the idle timeout setting in HttpServer2, which is currently hard-coded to 10 seconds. Setting it to 1 second dropped open fds from 20 thousand down to 3 thousand in my experiment.
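For illustration, here is a minimal sketch of the kind of change, assuming the timeout is applied where HttpServer2 builds its Jetty ServerConnector; the class, method, and constant names below are hypothetical and not HttpServer2's actual code:

{code:java}
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class IdleTimeoutSketch {
  // Hypothetical constant; the current value in HttpServer2 is hard-coded to 10 seconds.
  private static final long IDLE_TIMEOUT_MS = 1000L; // 1 second, as in the experiment above

  public static ServerConnector createConnector(Server server) {
    ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory());
    // Jetty closes a connection after this many milliseconds of inactivity,
    // releasing the underlying socket and its file descriptor.
    connector.setIdleTimeout(IDLE_TIMEOUT_MS);
    return connector;
  }
}
{code}

In practice the value would likely be exposed as a configurable property rather than hard-coded, so deployments can tune it to their workload.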
Filing this jira to invite open discussion of a solution.
Credit: misha@cloudera.com for the proposed Jetty idle timeout remedy; xiaochen for digging into this problem.
Screenshots:
CDH5 (Hadoop 2) KMS CPU utilization, resident memory and file descriptor chart.
CDH6 (Hadoop 3) KMS CPU utilization, resident memory and file descriptor chart.
CDH5 (Hadoop 2) GC activities on the KMS process
CDH6 (Hadoop 3) GC activities on the KMS process
JXray report
Open fd count drops from 20K down to 3K after the proposed change.
Issue Links
- causes: HDFS-15719 [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout (Resolved)
- is broken by: HADOOP-13597 Switch KMS from Tomcat to Jetty (Resolved)
- is related to: HADOOP-15743 Jetty and SSL tunings to stabilize KMS performance (Open)