HDFS-15719: [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout

Details

    • Incompatible change
    • Release Note:
      The default value of the configuration hadoop.http.idle_timeout.ms (how long Jetty waits before disconnecting an idle connection) is changed from 10000 to 60000. This property is inlined at compile time, so an application that references this property must be recompiled in order for it to take effect.
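
      For illustration, here is a minimal sketch of why the recompile is needed (the class and constant names below are invented for this example and are not Hadoop's actual ones): javac copies the value of a static final primitive constant into every class that references it, so a dependent built against the old 10000 default keeps that literal until it is rebuilt.

{code:java}
// Hypothetical classes invented for this example; not Hadoop's actual code.
public class TimeoutDefaults {
  // Stands in for the real default value of hadoop.http.idle_timeout.ms.
  public static final int HTTP_IDLE_TIMEOUT_MS_DEFAULT = 60000;
}

class DownstreamApp {
  // javac inlines the literal 60000 into DownstreamApp.class at compile time,
  // so swapping in a jar with a different default changes nothing here until
  // DownstreamApp itself is recompiled.
  static final int IDLE_TIMEOUT_MS = TimeoutDefaults.HTTP_IDLE_TIMEOUT_MS_DEFAULT;

  public static void main(String[] args) {
    System.out.println("Compiled-in idle timeout: " + IDLE_TIMEOUT_MS + " ms");
  }
}
{code}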

    Description

      In Hadoop 3, we migrated from Jetty 6 to Jetty 9. The migration was implemented in HADOOP-10075.

      However, HADOOP-10075 erroneously set the HttpServer2 socket idle timeout too low: we replaced SelectChannelConnector.setLowResourceMaxIdleTime() with ServerConnector.setIdleTimeout(), but the two methods are not equivalent.
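
      For context, below is a minimal standalone Jetty 9 sketch (not the actual HttpServer2 code) of the replacement call. ServerConnector.setIdleTimeout() applies to every connection on the connector, whereas Jetty 6's SelectChannelConnector.setLowResourceMaxIdleTime() only applied under low-resource conditions, leaving the ordinary 200-second default in effect the rest of the time.

{code:java}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class JettyIdleTimeoutSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(8480); // example port only
    // Every connection idle longer than this is closed; 10 seconds mirrors
    // the value HttpServer2 ended up with after the Jetty 9 migration.
    connector.setIdleTimeout(10_000L);
    server.addConnector(connector);
    server.start();
    server.join();
  }
}
{code}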

      Essentially, HttpServer2's idle timeout used to be the default set by Jetty 6, which is 200 seconds. Since Hadoop 3, the idle timeout has been 10 seconds, which is unreasonably short for JournalNodes. If a NameNode tries to download a large edit log from a JournalNode (say a few hundred MB), the transfer is likely to exceed 10 seconds. When that happens, both NameNodes crash, and there is no workaround unless you apply the patch in HADOOP-15696, which adds a config switch for the idle timeout. Fortunately, this does not happen often.
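
      As a sketch of that workaround (assuming the hadoop.http.idle_timeout.ms key described in the release note above; the same property can also be set in core-site.xml), an operator or test could raise the timeout like this:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RaiseIdleTimeout {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 200000 ms restores the Jetty 6-era behavior this issue proposes.
    conf.setInt("hadoop.http.idle_timeout.ms", 200000);
    System.out.println("hadoop.http.idle_timeout.ms = "
        + conf.getInt("hadoop.http.idle_timeout.ms", 60000));
  }
}
{code}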

      Proposal: bump the idle timeout default to 200 seconds to match the Jetty 6 behavior. (Jetty 9 reduced the default idle timeout to 30 seconds, which is not suitable for JournalNodes.)

      Other things to consider:
      1. The fsck servlet? (Somehow I suspect this is related to the socket timeout reported in HDFS-7175.)
      2. WebHDFS and HttpFS? We have also received reports that WebHDFS can time out, so a longer timeout makes sense here too.
      3. KMS? Will the longer timeout cause more lingering sockets?

      Thanks to zhenshan.wen for the discussion.

People

Assignee: Wei-Chiu Chuang (weichiu)
Reporter: Wei-Chiu Chuang (weichiu)
Votes: 0
Watchers: 6

Time Tracking

Original Estimate: Not Specified
Remaining Estimate: 0h
Time Spent: 1.5h